path: root/lib
author     Linus Torvalds <torvalds@linux-foundation.org>   2025-07-31 14:57:54 -0700
committer  Linus Torvalds <torvalds@linux-foundation.org>   2025-07-31 14:57:54 -0700
commit     beace86e61e465dba204a268ab3f3377153a4973 (patch)
tree       24f90cb26bf39eb7724326cdf3e8bffed7c05e50 /lib
parent     Merge tag 'mtd/for-6.17' of git://git.kernel.org/pub/scm/linux/kernel/git/mtd... (diff)
parent     MAINTAINERS: add missing headers to mempory policy & migration section (diff)
download   linux-beace86e61e465dba204a268ab3f3377153a4973.tar.gz
           linux-beace86e61e465dba204a268ab3f3377153a4973.zip
Merge tag 'mm-stable-2025-07-30-15-25' of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
Pull MM updates from Andrew Morton:

"As usual, many cleanups. The below blurbiage describes 42 patchsets. 21 of those are partially or fully cleanup work. "cleans up", "cleanup", "maintainability", "rationalizes", etc. I never knew the MM code was so dirty.

- "mm: ksm: prevent KSM from breaking merging of new VMAs" (Lorenzo Stoakes) addresses an issue with KSM's PR_SET_MEMORY_MERGE mode: newly mapped VMAs were not eligible for merging with existing adjacent VMAs.
- "mm/damon: introduce DAMON_STAT for simple and practical access monitoring" (SeongJae Park) adds a new kernel module which simplifies the setup and usage of DAMON in production environments.
- "stop passing a writeback_control to swap/shmem writeout" (Christoph Hellwig) is a cleanup to the writeback code which removes a couple of pointers from struct writeback_control.
- "drivers/base/node.c: optimization and cleanups" (Donet Tom) contains largely uncorrelated cleanups to the NUMA node setup and management code.
- "mm: userfaultfd: assorted fixes and cleanups" (Tal Zussman) does some maintenance work on the userfaultfd code.
- "Readahead tweaks for larger folios" (Ryan Roberts) implements some tuneups for pagecache readahead when it is reading into order>0 folios.
- "selftests/mm: Tweaks to the cow test" (Mark Brown) provides some cleanups and consistency improvements to the selftests code.
- "Optimize mremap() for large folios" (Dev Jain) does that. A 37% reduction in execution time was measured in a memset+mremap+munmap microbenchmark.
- "Remove zero_user()" (Matthew Wilcox) expunges zero_user() in favor of the more modern memzero_page() (a conversion sketch appears after this list).
- "mm/huge_memory: vmf_insert_folio_*() and vmf_insert_pfn_pud() fixes" (David Hildenbrand) addresses some warts which David noticed in the huge page code. These were not known to be causing any issues at this time.
- "mm/damon: use alloc_migrate_target() for DAMOS_MIGRATE_{HOT,COLD" (SeongJae Park) provides some cleanup and consolidation work in DAMON.
- "use vm_flags_t consistently" (Lorenzo Stoakes) uses vm_flags_t in places where we were inappropriately using other types.
- "mm/memfd: Reserve hugetlb folios before allocation" (Vivek Kasireddy) increases the reliability of large page allocation in the memfd code.
- "mm: Remove pXX_devmap page table bit and pfn_t type" (Alistair Popple) removes several now-unneeded PFN_* flags.
- "mm/damon: decouple sysfs from core" (SeongJae Park) implements some cleanup and maintainability work in the DAMON sysfs layer.
- "madvise cleanup" (Lorenzo Stoakes) does quite a lot of cleanup/maintenance work in the madvise() code.
- "madvise anon_name cleanups" (Vlastimil Babka) provides additional cleanups on top of Lorenzo's effort.
- "Implement numa node notifier" (Oscar Salvador) creates a standalone notifier for NUMA node memory state changes. Previously these were lumped under the more general memory on/offline notifier.
- "Make MIGRATE_ISOLATE a standalone bit" (Zi Yan) cleans up the pageblock isolation code and fixes a potential issue which doesn't seem to cause any problems in practice.
- "selftests/damon: add python and drgn based DAMON sysfs functionality tests" (SeongJae Park) adds additional drgn- and python-based DAMON selftests which are more comprehensive than the existing selftest suite.
- "Misc rework on hugetlb faulting path" (Oscar Salvador) fixes a rather obscure deadlock in the hugetlb fault code and follows that fix with a series of cleanups.
"cma: factor out allocation logic from __cma_declare_contiguous_nid" (Mike Rapoport) rationalizes and cleans up the highmem-specific code in the CMA allocator. "mm/migration: rework movable_ops page migration (part 1)" (David Hildenbrand) provides cleanups and future-preparedness to the migration code. "mm/damon: add trace events for auto-tuned monitoring intervals and DAMOS quota" (SeongJae Park) adds some tracepoints to some DAMON auto-tuning code. "mm/damon: fix misc bugs in DAMON modules" (SeongJae Park) does that. "mm/damon: misc cleanups" (SeongJae Park) also does what it claims. "mm: folio_pte_batch() improvements" (David Hildenbrand) cleans up the large folio PTE batching code. "mm/damon/vaddr: Allow interleaving in migrate_{hot,cold} actions" (SeongJae Park) facilitates dynamic alteration of DAMON's inter-node allocation policy. "Remove unmap_and_put_page()" (Vishal Moola) provides a couple of page->folio conversions. "mm: per-node proactive reclaim" (Davidlohr Bueso) implements a per-node control of proactive reclaim - beyond the current memcg-based implementation. "mm/damon: remove damon_callback" (SeongJae Park) replaces the damon_callback interface with a more general and powerful damon_call()+damos_walk() interface. "mm/mremap: permit mremap() move of multiple VMAs" (Lorenzo Stoakes) implements a number of mremap cleanups (of course) in preparation for adding new mremap() functionality: newly permit the remapping of multiple VMAs when the user is specifying MREMAP_FIXED. It still excludes some specialized situations where this cannot be performed reliably. "drop hugetlb_free_pgd_range()" (Anthony Yznaga) switches some sparc hugetlb code over to the generic version and removes the thus-unneeded hugetlb_free_pgd_range(). "mm/damon/sysfs: support periodic and automated stats update" (SeongJae Park) augments the present userspace-requested update of DAMON sysfs monitoring files. Automatic update is now provided, along with a tunable to control the update interval. "Some randome fixes and cleanups to swapfile" (Kemeng Shi) does what is claims. "mm: introduce snapshot_page" (Luiz Capitulino and David Hildenbrand) provides (and uses) a means by which debug-style functions can grab a copy of a pageframe and inspect it locklessly without tripping over the races inherent in operating on the live pageframe directly. "use per-vma locks for /proc/pid/maps reads" (Suren Baghdasaryan) addresses the large contention issues which can be triggered by reads from that procfs file. Latencies are reduced by more than half in some situations. The series also introduces several new selftests for the /proc/pid/maps interface. "__folio_split() clean up" (Zi Yan) cleans up __folio_split()! "Optimize mprotect() for large folios" (Dev Jain) provides some quite large (>3x) speedups to mprotect() when dealing with large folios. "selftests/mm: reuse FORCE_READ to replace "asm volatile("" : "+r" (XXX));" and some cleanup" (wang lian) does some cleanup work in the selftests code. "tools/testing: expand mremap testing" (Lorenzo Stoakes) extends the mremap() selftest in several ways, including adding more checking of Lorenzo's recently added "permit mremap() move of multiple VMAs" feature. "selftests/damon/sysfs.py: test all parameters" (SeongJae Park) extends the DAMON sysfs interface selftest so that it tests all possible user-requested parameters. 
* tag 'mm-stable-2025-07-30-15-25' of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm: (370 commits)
  MAINTAINERS: add missing headers to mempory policy & migration section
  MAINTAINERS: add missing file to cgroup section
  MAINTAINERS: add MM MISC section, add missing files to MISC and CORE
  MAINTAINERS: add missing zsmalloc file
  MAINTAINERS: add missing files to page alloc section
  MAINTAINERS: add missing shrinker files
  MAINTAINERS: move memremap.[ch] to hotplug section
  MAINTAINERS: add missing mm_slot.h file THP section
  MAINTAINERS: add missing interval_tree.c to memory mapping section
  MAINTAINERS: add missing percpu-internal.h file to per-cpu section
  mm/page_alloc: remove trace_mm_alloc_contig_migrate_range_info()
  selftests/damon: introduce _common.sh to host shared function
  selftests/damon/sysfs.py: test runtime reduction of DAMON parameters
  selftests/damon/sysfs.py: test non-default parameters runtime commit
  selftests/damon/sysfs.py: generalize DAMON context commit assertion
  selftests/damon/sysfs.py: generalize monitoring attributes commit assertion
  selftests/damon/sysfs.py: generalize DAMOS schemes commit assertion
  selftests/damon/sysfs.py: test DAMOS filters commitment
  selftests/damon/sysfs.py: generalize DAMOS scheme commit assertion
  selftests/damon/sysfs.py: test DAMOS destinations commitment
  ...
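A minimal userspace sketch of the behaviour newly permitted by the "mm/mremap: permit mremap() move of multiple VMAs" series mentioned above. The page size, layout and destination address handling are illustrative, and error handling is reduced to a single perror():

#define _GNU_SOURCE
#include <sys/mman.h>
#include <stdio.h>

int main(void)
{
	size_t len = 3 * 4096;

	/* One anonymous mapping... */
	char *old = mmap(NULL, len, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	/* ...split into three VMAs by changing the middle page's protection. */
	mprotect(old + 4096, 4096, PROT_READ);

	/* Reserve a destination area, then move the whole range in one call.
	 * With MREMAP_FIXED (plus the mandatory MREMAP_MAYMOVE), a move that
	 * spans multiple VMAs is now accepted, per the series description. */
	char *dst = mmap(NULL, len, PROT_NONE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (mremap(old, len, len, MREMAP_MAYMOVE | MREMAP_FIXED, dst) == MAP_FAILED)
		perror("mremap");
	return 0;
}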
Diffstat (limited to 'lib')
-rw-r--r--  lib/alloc_tag.c        31
-rw-r--r--  lib/codetag.c          17
-rw-r--r--  lib/maple_tree.c       40
-rw-r--r--  lib/test_hmm.c         14
-rw-r--r--  lib/test_maple_tree.c  32
-rw-r--r--  lib/test_vmalloc.c     42
-rw-r--r--  lib/xarray.c            3
7 files changed, 121 insertions, 58 deletions
diff --git a/lib/alloc_tag.c b/lib/alloc_tag.c
index 0142bc916f73..e9b33848700a 100644
--- a/lib/alloc_tag.c
+++ b/lib/alloc_tag.c
@@ -25,8 +25,10 @@ static bool mem_profiling_support;
static struct codetag_type *alloc_tag_cttype;
+#ifdef CONFIG_ARCH_MODULE_NEEDS_WEAK_PER_CPU
DEFINE_PER_CPU(struct alloc_tag_counters, _shared_alloc_tag);
EXPORT_SYMBOL(_shared_alloc_tag);
+#endif
DEFINE_STATIC_KEY_MAYBE(CONFIG_MEM_ALLOC_PROFILING_ENABLED_BY_DEFAULT,
mem_alloc_profiling_key);
@@ -46,21 +48,16 @@ struct allocinfo_private {
static void *allocinfo_start(struct seq_file *m, loff_t *pos)
{
struct allocinfo_private *priv;
- struct codetag *ct;
loff_t node = *pos;
- priv = kzalloc(sizeof(*priv), GFP_KERNEL);
- m->private = priv;
- if (!priv)
- return NULL;
-
- priv->print_header = (node == 0);
+ priv = (struct allocinfo_private *)m->private;
codetag_lock_module_list(alloc_tag_cttype, true);
- priv->iter = codetag_get_ct_iter(alloc_tag_cttype);
- while ((ct = codetag_next_ct(&priv->iter)) != NULL && node)
- node--;
-
- return ct ? priv : NULL;
+ if (node == 0) {
+ priv->print_header = true;
+ priv->iter = codetag_get_ct_iter(alloc_tag_cttype);
+ codetag_next_ct(&priv->iter);
+ }
+ return priv->iter.ct ? priv : NULL;
}
static void *allocinfo_next(struct seq_file *m, void *arg, loff_t *pos)
@@ -77,12 +74,7 @@ static void *allocinfo_next(struct seq_file *m, void *arg, loff_t *pos)
static void allocinfo_stop(struct seq_file *m, void *arg)
{
- struct allocinfo_private *priv = (struct allocinfo_private *)m->private;
-
- if (priv) {
- codetag_lock_module_list(alloc_tag_cttype, false);
- kfree(priv);
- }
+ codetag_lock_module_list(alloc_tag_cttype, false);
}
static void print_allocinfo_header(struct seq_buf *buf)
@@ -820,7 +812,8 @@ static int __init alloc_tag_init(void)
return 0;
}
- if (!proc_create_seq(ALLOCINFO_FILE_NAME, 0400, NULL, &allocinfo_seq_op)) {
+ if (!proc_create_seq_private(ALLOCINFO_FILE_NAME, 0400, NULL, &allocinfo_seq_op,
+ sizeof(struct allocinfo_private), NULL)) {
pr_err("Failed to create %s file\n", ALLOCINFO_FILE_NAME);
shutdown_mem_profiling(false);
return -ENOMEM;
diff --git a/lib/codetag.c b/lib/codetag.c
index 650d54d7e14d..545911cebd25 100644
--- a/lib/codetag.c
+++ b/lib/codetag.c
@@ -11,8 +11,14 @@ struct codetag_type {
struct list_head link;
unsigned int count;
struct idr mod_idr;
- struct rw_semaphore mod_lock; /* protects mod_idr */
+ /*
+ * protects mod_idr, next_mod_seq,
+ * iter->mod_seq and cmod->mod_seq
+ */
+ struct rw_semaphore mod_lock;
struct codetag_type_desc desc;
+ /* generates unique sequence number for module load */
+ unsigned long next_mod_seq;
};
struct codetag_range {
@@ -23,6 +29,7 @@ struct codetag_range {
struct codetag_module {
struct module *mod;
struct codetag_range range;
+ unsigned long mod_seq;
};
static DEFINE_MUTEX(codetag_lock);
@@ -48,6 +55,7 @@ struct codetag_iterator codetag_get_ct_iter(struct codetag_type *cttype)
.cmod = NULL,
.mod_id = 0,
.ct = NULL,
+ .mod_seq = 0,
};
return iter;
@@ -91,11 +99,13 @@ struct codetag *codetag_next_ct(struct codetag_iterator *iter)
if (!cmod)
break;
- if (cmod != iter->cmod) {
+ if (!iter->cmod || iter->mod_seq != cmod->mod_seq) {
iter->cmod = cmod;
+ iter->mod_seq = cmod->mod_seq;
ct = get_first_module_ct(cmod);
- } else
+ } else {
ct = get_next_module_ct(iter);
+ }
if (ct)
break;
@@ -191,6 +201,7 @@ static int codetag_module_init(struct codetag_type *cttype, struct module *mod)
cmod->range = range;
down_write(&cttype->mod_lock);
+ cmod->mod_seq = ++cttype->next_mod_seq;
mod_id = idr_alloc(&cttype->mod_idr, cmod, 0, 0, GFP_KERNEL);
if (mod_id >= 0) {
if (cttype->desc.module_load) {
diff --git a/lib/maple_tree.c b/lib/maple_tree.c
index ef66be963798..b4ee2d29d7a9 100644
--- a/lib/maple_tree.c
+++ b/lib/maple_tree.c
@@ -1053,7 +1053,7 @@ static inline void mte_set_gap(const struct maple_enode *mn,
* mas_ascend() - Walk up a level of the tree.
* @mas: The maple state
*
- * Sets the @mas->max and @mas->min to the correct values when walking up. This
+ * Sets the @mas->max and @mas->min for the parent node of mas->node. This
* may cause several levels of walking up to find the correct min and max.
* May find a dead node which will cause a premature return.
* Return: 1 on dead node, 0 otherwise
@@ -1098,6 +1098,12 @@ static int mas_ascend(struct ma_state *mas)
min = 0;
max = ULONG_MAX;
+
+ /*
+ * !mas->offset implies that parent node min == mas->min.
+ * mas->offset > 0 implies that we need to walk up to find the
+ * implied pivot min.
+ */
if (!mas->offset) {
min = mas->min;
set_min = true;
@@ -4560,15 +4566,12 @@ again:
if (unlikely(mas_rewalk_if_dead(mas, node, save_point)))
goto retry;
-
if (likely(entry))
return entry;
if (!empty) {
- if (mas->index <= min) {
- mas->status = ma_underflow;
- return NULL;
- }
+ if (mas->index <= min)
+ goto underflow;
goto again;
}
@@ -4930,7 +4933,7 @@ void *mas_walk(struct ma_state *mas)
{
void *entry;
- if (!mas_is_active(mas) || !mas_is_start(mas))
+ if (!mas_is_active(mas) && !mas_is_start(mas))
mas->status = ma_start;
retry:
entry = mas_state_walk(mas);
@@ -5659,6 +5662,17 @@ int mas_expected_entries(struct ma_state *mas, unsigned long nr_entries)
}
EXPORT_SYMBOL_GPL(mas_expected_entries);
+static void mas_may_activate(struct ma_state *mas)
+{
+ if (!mas->node) {
+ mas->status = ma_start;
+ } else if (mas->index > mas->max || mas->index < mas->min) {
+ mas->status = ma_start;
+ } else {
+ mas->status = ma_active;
+ }
+}
+
static bool mas_next_setup(struct ma_state *mas, unsigned long max,
void **entry)
{
@@ -5682,11 +5696,11 @@ static bool mas_next_setup(struct ma_state *mas, unsigned long max,
break;
case ma_overflow:
/* Overflowed before, but the max changed */
- mas->status = ma_active;
+ mas_may_activate(mas);
break;
case ma_underflow:
/* The user expects the mas to be one before where it is */
- mas->status = ma_active;
+ mas_may_activate(mas);
*entry = mas_walk(mas);
if (*entry)
return true;
@@ -5807,11 +5821,11 @@ static bool mas_prev_setup(struct ma_state *mas, unsigned long min, void **entry
break;
case ma_underflow:
/* underflowed before but the min changed */
- mas->status = ma_active;
+ mas_may_activate(mas);
break;
case ma_overflow:
/* User expects mas to be one after where it is */
- mas->status = ma_active;
+ mas_may_activate(mas);
*entry = mas_walk(mas);
if (*entry)
return true;
@@ -5976,7 +5990,7 @@ static __always_inline bool mas_find_setup(struct ma_state *mas, unsigned long m
return true;
}
- mas->status = ma_active;
+ mas_may_activate(mas);
*entry = mas_walk(mas);
if (*entry)
return true;
@@ -5985,7 +5999,7 @@ static __always_inline bool mas_find_setup(struct ma_state *mas, unsigned long m
if (unlikely(mas->last >= max))
return true;
- mas->status = ma_active;
+ mas_may_activate(mas);
*entry = mas_walk(mas);
if (*entry)
return true;
diff --git a/lib/test_hmm.c b/lib/test_hmm.c
index 5b144bc5c4ec..761725bc713c 100644
--- a/lib/test_hmm.c
+++ b/lib/test_hmm.c
@@ -330,7 +330,7 @@ static int dmirror_fault(struct dmirror *dmirror, unsigned long start,
{
struct mm_struct *mm = dmirror->notifier.mm;
unsigned long addr;
- unsigned long pfns[64];
+ unsigned long pfns[32];
struct hmm_range range = {
.notifier = &dmirror->notifier,
.hmm_pfns = pfns,
@@ -879,8 +879,8 @@ static int dmirror_migrate_to_system(struct dmirror *dmirror,
unsigned long size = cmd->npages << PAGE_SHIFT;
struct mm_struct *mm = dmirror->notifier.mm;
struct vm_area_struct *vma;
- unsigned long src_pfns[64] = { 0 };
- unsigned long dst_pfns[64] = { 0 };
+ unsigned long src_pfns[32] = { 0 };
+ unsigned long dst_pfns[32] = { 0 };
struct migrate_vma args = { 0 };
unsigned long next;
int ret;
@@ -939,8 +939,8 @@ static int dmirror_migrate_to_device(struct dmirror *dmirror,
unsigned long size = cmd->npages << PAGE_SHIFT;
struct mm_struct *mm = dmirror->notifier.mm;
struct vm_area_struct *vma;
- unsigned long src_pfns[64] = { 0 };
- unsigned long dst_pfns[64] = { 0 };
+ unsigned long src_pfns[32] = { 0 };
+ unsigned long dst_pfns[32] = { 0 };
struct dmirror_bounce bounce;
struct migrate_vma args = { 0 };
unsigned long next;
@@ -1144,8 +1144,8 @@ static int dmirror_snapshot(struct dmirror *dmirror,
unsigned long size = cmd->npages << PAGE_SHIFT;
unsigned long addr;
unsigned long next;
- unsigned long pfns[64];
- unsigned char perm[64];
+ unsigned long pfns[32];
+ unsigned char perm[32];
char __user *uptr;
struct hmm_range range = {
.hmm_pfns = pfns,
diff --git a/lib/test_maple_tree.c b/lib/test_maple_tree.c
index 13e2a10d7554..cb3936595b0d 100644
--- a/lib/test_maple_tree.c
+++ b/lib/test_maple_tree.c
@@ -3177,6 +3177,7 @@ static noinline void __init check_state_handling(struct maple_tree *mt)
void *entry, *ptr = (void *) 0x1234500;
void *ptr2 = &ptr;
void *ptr3 = &ptr2;
+ unsigned long index;
/* Check MAS_ROOT First */
mtree_store_range(mt, 0, 0, ptr, GFP_KERNEL);
@@ -3707,6 +3708,37 @@ static noinline void __init check_state_handling(struct maple_tree *mt)
MT_BUG_ON(mt, !mas_is_active(&mas));
mas_unlock(&mas);
+ mtree_destroy(mt);
+
+ mt_init_flags(mt, MT_FLAGS_ALLOC_RANGE);
+ mas_lock(&mas);
+ for (int count = 0; count < 30; count++) {
+ mas_set(&mas, count);
+ mas_store_gfp(&mas, xa_mk_value(count), GFP_KERNEL);
+ }
+
+ /* Ensure mas_find works with MA_UNDERFLOW */
+ mas_set(&mas, 0);
+ entry = mas_walk(&mas);
+ mas_set(&mas, 0);
+ mas_prev(&mas, 0);
+ MT_BUG_ON(mt, mas.status != ma_underflow);
+ MT_BUG_ON(mt, mas_find(&mas, ULONG_MAX) != entry);
+
+ /* Restore active on mas_next */
+ entry = mas_next(&mas, ULONG_MAX);
+ index = mas.index;
+ mas_prev(&mas, index);
+ MT_BUG_ON(mt, mas.status != ma_underflow);
+ MT_BUG_ON(mt, mas_next(&mas, ULONG_MAX) != entry);
+
+ /* Ensure overflow -> active works */
+ mas_prev(&mas, 0);
+ mas_next(&mas, index - 1);
+ MT_BUG_ON(mt, mas.status != ma_overflow);
+ MT_BUG_ON(mt, mas_next(&mas, ULONG_MAX) != entry);
+
+ mas_unlock(&mas);
}
static noinline void __init alloc_cyclic_testing(struct maple_tree *mt)
diff --git a/lib/test_vmalloc.c b/lib/test_vmalloc.c
index 1b0b59549aaf..2815658ccc37 100644
--- a/lib/test_vmalloc.c
+++ b/lib/test_vmalloc.c
@@ -41,7 +41,7 @@ __param(int, nr_pages, 0,
__param(bool, use_huge, false,
"Use vmalloc_huge in fix_size_alloc_test");
-__param(int, run_test_mask, INT_MAX,
+__param(int, run_test_mask, 7,
"Set tests specified in the mask.\n\n"
"\t\tid: 1, name: fix_size_alloc_test\n"
"\t\tid: 2, name: full_fit_alloc_test\n"
@@ -396,25 +396,27 @@ cleanup:
struct test_case_desc {
const char *test_name;
int (*test_func)(void);
+ bool xfail;
};
static struct test_case_desc test_case_array[] = {
- { "fix_size_alloc_test", fix_size_alloc_test },
- { "full_fit_alloc_test", full_fit_alloc_test },
- { "long_busy_list_alloc_test", long_busy_list_alloc_test },
- { "random_size_alloc_test", random_size_alloc_test },
- { "fix_align_alloc_test", fix_align_alloc_test },
- { "random_size_align_alloc_test", random_size_align_alloc_test },
- { "align_shift_alloc_test", align_shift_alloc_test },
- { "pcpu_alloc_test", pcpu_alloc_test },
- { "kvfree_rcu_1_arg_vmalloc_test", kvfree_rcu_1_arg_vmalloc_test },
- { "kvfree_rcu_2_arg_vmalloc_test", kvfree_rcu_2_arg_vmalloc_test },
- { "vm_map_ram_test", vm_map_ram_test },
+ { "fix_size_alloc_test", fix_size_alloc_test, },
+ { "full_fit_alloc_test", full_fit_alloc_test, },
+ { "long_busy_list_alloc_test", long_busy_list_alloc_test, },
+ { "random_size_alloc_test", random_size_alloc_test, },
+ { "fix_align_alloc_test", fix_align_alloc_test, },
+ { "random_size_align_alloc_test", random_size_align_alloc_test, },
+ { "align_shift_alloc_test", align_shift_alloc_test, true },
+ { "pcpu_alloc_test", pcpu_alloc_test, },
+ { "kvfree_rcu_1_arg_vmalloc_test", kvfree_rcu_1_arg_vmalloc_test, },
+ { "kvfree_rcu_2_arg_vmalloc_test", kvfree_rcu_2_arg_vmalloc_test, },
+ { "vm_map_ram_test", vm_map_ram_test, },
/* Add a new test case here. */
};
struct test_case_data {
int test_failed;
+ int test_xfailed;
int test_passed;
u64 time;
};
@@ -444,7 +446,7 @@ static int test_func(void *private)
{
struct test_driver *t = private;
int random_array[ARRAY_SIZE(test_case_array)];
- int index, i, j;
+ int index, i, j, ret;
ktime_t kt;
u64 delta;
@@ -468,11 +470,14 @@ static int test_func(void *private)
*/
if (!((run_test_mask & (1 << index)) >> index))
continue;
-
kt = ktime_get();
for (j = 0; j < test_repeat_count; j++) {
- if (!test_case_array[index].test_func())
+ ret = test_case_array[index].test_func();
+
+ if (!ret && !test_case_array[index].xfail)
t->data[index].test_passed++;
+ else if (ret && test_case_array[index].xfail)
+ t->data[index].test_xfailed++;
else
t->data[index].test_failed++;
}
@@ -576,10 +581,11 @@ static void do_concurrent_test(void)
continue;
pr_info(
- "Summary: %s passed: %d failed: %d repeat: %d loops: %d avg: %llu usec\n",
+ "Summary: %s passed: %d failed: %d xfailed: %d repeat: %d loops: %d avg: %llu usec\n",
test_case_array[j].test_name,
t->data[j].test_passed,
t->data[j].test_failed,
+ t->data[j].test_xfailed,
test_repeat_count, test_loop_count,
t->data[j].time);
}
@@ -598,7 +604,11 @@ static int __init vmalloc_test_init(void)
return IS_BUILTIN(CONFIG_TEST_VMALLOC) ? 0:-EAGAIN;
}
+#ifdef MODULE
module_init(vmalloc_test_init)
+#else
+late_initcall(vmalloc_test_init);
+#endif
MODULE_LICENSE("GPL");
MODULE_AUTHOR("Uladzislau Rezki");
diff --git a/lib/xarray.c b/lib/xarray.c
index 76dde3a1cacf..ae3d80f4b4ee 100644
--- a/lib/xarray.c
+++ b/lib/xarray.c
@@ -1910,6 +1910,7 @@ EXPORT_SYMBOL(xa_store_range);
* @xas: XArray operation state.
*
* Called after xas_load, the xas should not be in an error state.
+ * The xas should not be pointing to a sibling entry.
*
* Return: A number between 0 and 63 indicating the order of the entry.
*/
@@ -1920,6 +1921,8 @@ int xas_get_order(struct xa_state *xas)
if (!xas->xa_node)
return 0;
+ XA_NODE_BUG_ON(xas->xa_node, xa_is_sibling(xa_entry(xas->xa,
+ xas->xa_node, xas->xa_offset)));
for (;;) {
unsigned int slot = xas->xa_offset + (1 << order);