| author | Linus Torvalds <torvalds@linux-foundation.org> | 2023-04-27 16:36:55 -0700 |
|---|---|---|
| committer | Linus Torvalds <torvalds@linux-foundation.org> | 2023-04-27 16:36:55 -0700 |
| commit | b6a7828502dc769e1a5329027bc5048222fa210a (patch) | |
| tree | 60418229584831505036bd2d368320b7387e7b3a /kernel/module/dups.c | |
| parent | cdx: fix build failure due to sysfs 'bus_type' argument needing to be const (diff) | |
| parent | module: add debugging auto-load duplicate module support (diff) | |
| download | linux-b6a7828502dc769e1a5329027bc5048222fa210a.tar.gz linux-b6a7828502dc769e1a5329027bc5048222fa210a.zip | |
Merge tag 'modules-6.4-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/mcgrof/linux
Pull module updates from Luis Chamberlain:
"The summary of the changes for this pull requests is:
- Song Liu's new struct module_memory replacement
- Nick Alcock's MODULE_LICENSE() removal for non-modules
- My cleanups and enhancements to reduce the areas where we vmalloc
module memory for duplicates, and the respective debug code which
proves the remaining vmalloc pressure comes from userspace.
Most of the changes have been in linux-next for quite some time except
the minor fixes I made to check if a module was already loaded prior
to allocating the final module memory with vmalloc and the respective
debug code it introduces to help clarify the issue. Although the
functional change is small it is rather safe as it can only *help*
reduce vmalloc space for duplicates and is confirmed to fix a bootup
issue on systems with over 400 CPUs with KASAN enabled. I don't expect stable
kernels to pick up that fix as the cleanups would have also had to
have been picked up. Folks on larger CPU systems with modules will
want to just upgrade if vmalloc space has been an issue on bootup.
Given the size of this request, here are some more elaborate details:
The functional change in this pull request is the very first
patch from Song Liu which replaces the 'struct module_layout' with a
new 'struct module_memory'. The old data structure tried to lump all
supported module memory types together in one data structure; the new
one abstracts the differences between the memory types in a module and
allows each one to provide its own set of details. This paves the way
for dealing with them in a cleaner way in the future.
If you look at the changes, they also provide a nice cleanup of how we
handle these different memory areas in a module. This change has been
in linux-next since before the merge window opened for v6.3, so it has
had more than a full kernel cycle of testing. That's a good thing, as
quite a few fixes have been found for it.
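As a rough, trimmed-down sketch of the shape of the change (the field
and enum names below are approximations for illustration, not the exact
upstream definitions):

/* Old approach: a single layout struct lumping all module memory together. */
struct module_layout {
        void *base;
        unsigned int size;
        unsigned int text_size;
        unsigned int ro_size;
        unsigned int ro_after_init_size;
};

/* New approach: one small descriptor per memory type, indexed by an enum,
 * so each type carries its own details and can be handled uniformly. */
enum mod_mem_type {
        MOD_TEXT,
        MOD_DATA,
        MOD_RODATA,
        MOD_RO_AFTER_INIT,
        MOD_INIT_TEXT,
        MOD_INIT_DATA,
        MOD_INIT_RODATA,
        MOD_MEM_NUM_TYPES,
};

struct module_memory {
        void *base;
        unsigned int size;
};

/* struct module then carries: struct module_memory mem[MOD_MEM_NUM_TYPES]; */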
Jason Baron then made dynamic debug a first-class citizen of the
modules subsystem by using module notifier callbacks to allocate /
remove module-specific dynamic debug information.
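The module notifier pattern looks roughly like this (a minimal sketch
with hypothetical example_* names, not the actual dynamic debug code):

#include <linux/init.h>
#include <linux/module.h>
#include <linux/notifier.h>

/* Hypothetical per-module setup/teardown driven by module state changes. */
static int example_module_notify(struct notifier_block *nb,
                                 unsigned long action, void *data)
{
        struct module *mod = data;

        pr_debug("module %s changed state\n", mod->name);

        switch (action) {
        case MODULE_STATE_COMING:
                /* allocate per-module debug state for 'mod' here */
                break;
        case MODULE_STATE_GOING:
                /* free per-module debug state for 'mod' here */
                break;
        }

        return NOTIFY_OK;
}

static struct notifier_block example_module_nb = {
        .notifier_call = example_module_notify,
};

static int __init example_init(void)
{
        return register_module_notifier(&example_module_nb);
}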
Nick Alcock has done quite a bit of work cross-tree to remove module
license tags from things which cannot possibly be modules, at my
request, so as to:
a) help him with his longer term tooling goals, which require a
deterministic evaluation of whether a symbol could ever be part of a
module or not. But quite recently it has been made clear that tooling
is not the only thing that would benefit. Disambiguating symbols also
helps efforts such as live patching, kprobes and BPF, but for other
reasons, and R&D in this area is active with no clear solution in
sight.
b) help us inch closer to the now generally accepted long term goal
of automating all the MODULE_LICENSE() tags from SPDX license tags
Insofar as a) is concerned, although module license tags are a no-op
for non-modules, tools which want to create a mapping of possible
modules can only rely on the module license tag after the commit
8b41fc4454e ("kbuild: create modules.builtin without
Makefile.modbuiltin or tristate.conf").
Nick has been working on this *for years* and AFAICT I was the only
one to suggest two alternatives to this approach for tooling. The
complexity in one of my suggested approaches lies in the fact that we'd
need a possible-obj-m and a could-be-module which would check whether
the object being built is part of any kconfig build which could ever
lead to it being part of a module, and if so define a new define
-DPOSSIBLE_MODULE [0].
A more obvious yet theoretical approach I've suggested would be to
have a tristate in kconfig imply the same new -DPOSSIBLE_MODULE as
well, but that means getting kconfig symbol names to always map to
modules, and I don't think that's the case today. I am not aware of
Nick or anyone exploring either of these options. Quite recently Josh
Poimboeuf has pointed out that live patching, kprobes and BPF would
benefit from resolving some part of the disambiguation as well but for
other reasons. The function granularity KASLR (fgkaslr) patches were
mentioned but Joe Lawrence has clarified this effort has been dropped
with no clear solution in sight [1].
In the meantime removing module license tags from code which could
never be modules is welcomed for both objectives mentioned above. Some
developers have also welcomed these changes as it has helped clarify
when a module was never possible and they forgot to clean this up, and
so you'll see quite a bit of Nick's patches in other pull requests for
this merge window. I just picked up the stragglers after rc3. LWN has
good coverage on the motivation behind this work [2] and the typical
cross-tree issues he ran into along the way. The only concrete blocker
issue he ran into was that we should not remove the MODULE_LICENSE()
tags from files which have no SPDX tags yet, even if they can never be
modules. Nick ended up giving up on his efforts due to having to do
this vetting and the backlash he ran into from folks who really did
*not understand* the core of the issue and were not providing any
alternative or guidance. I've gone through his changes and dropped the
patches which dropped the module license tags where an SPDX license
tag was missing; it only amounted to 11 drivers. To see if a pull
request deals with a
file which lacks SPDX tags you can just use:
./scripts/spdxcheck.py -f \
$(git diff --name-only commit-id | xargs echo)
You'll see a core module file in this pull request for the above, but
that's not related to his changes. We just need to add the SPDX
license tag for the kernel/module/kmod.c file in the future, but it
demonstrates the effectiveness of the script.
Most of Nick's changes were spread out through different trees, and I
just picked up the slack after rc3 of the last kernel was out. Those
changes have been in linux-next for over two weeks.
The cleanups, the debug code, and the final fix I added for modules
were motivated by David Hildenbrand's report of boot failing on
systems with over 400 CPUs when KASAN was enabled due to running out
of virtual memory space. Although the functional change only consists
of 3 lines in the patch "module: avoid allocation if module is already
present and ready", proving that this was the best we can do on the
modules side took quite a bit of effort and new debug code.
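The shape of that early check, as a minimal sketch (simplified and
approximate, not the exact upstream code; the helper name comes from
the "extract patient module check into helper" patch in the shortlog
below):

/*
 * Sketch only: bail out before the final vmalloc() of module memory if a
 * module with the same name is already loaded or is currently coming up.
 * Function and enum names here are approximations for illustration.
 */
static int early_mod_check_sketch(struct load_info *info)
{
        int err;

        /* ... existing ELF, signature and blacklist checks ... */

        err = module_patient_check_exists(info->name, FAIL_DUP_MOD_BECOMING);
        if (err)
                return err;

        return 0;
}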
The initial cleanups I did on the modules side of things have been in
linux-next since around rc3 of the last kernel; the actual final fix
and debug code, however, have only been in linux-next for about a week
or so, but I think it is worth getting that code in for this merge
window as it does help fix / prove / evaluate the issues reported with
larger numbers of CPUs. Userspace is not yet fixed as it is taking a
bit of time for folks to understand the crux of the issue and find a
proper resolution. Worst comes to worst, I have a kludge-of-concept [3]
of how to make kernel_read*() calls for modules unique / converge
them, but I'm currently inclined to just see if userspace can fix this
instead"
Link: https://lore.kernel.org/all/Y/kXDqW+7d71C4wz@bombadil.infradead.org/ [0]
Link: https://lkml.kernel.org/r/025f2151-ce7c-5630-9b90-98742c97ac65@redhat.com [1]
Link: https://lwn.net/Articles/927569/ [2]
Link: https://lkml.kernel.org/r/20230414052840.1994456-3-mcgrof@kernel.org [3]
* tag 'modules-6.4-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/mcgrof/linux: (121 commits)
module: add debugging auto-load duplicate module support
module: stats: fix invalid_mod_bytes typo
module: remove use of uninitialized variable len
module: fix building stats for 32-bit targets
module: stats: include uapi/linux/module.h
module: avoid allocation if module is already present and ready
module: add debug stats to help identify memory pressure
module: extract patient module check into helper
modules/kmod: replace implementation with a semaphore
Change DEFINE_SEMAPHORE() to take a number argument
module: fix kmemleak annotations for non init ELF sections
module: Ignore L0 and rename is_arm_mapping_symbol()
module: Move is_arm_mapping_symbol() to module_symbol.h
module: Sync code of is_arm_mapping_symbol()
scripts/gdb: use mem instead of core_layout to get the module address
interconnect: remove module-related code
interconnect: remove MODULE_LICENSE in non-modules
zswap: remove MODULE_LICENSE in non-modules
zpool: remove MODULE_LICENSE in non-modules
x86/mm/dump_pagetables: remove MODULE_LICENSE in non-modules
...
Diffstat (limited to 'kernel/module/dups.c')
| -rw-r--r-- | kernel/module/dups.c | 246 |
1 file changed, 246 insertions, 0 deletions
diff --git a/kernel/module/dups.c b/kernel/module/dups.c
new file mode 100644
index 000000000000..aa8e1361fdb5
--- /dev/null
+++ b/kernel/module/dups.c
@@ -0,0 +1,246 @@
+/* SPDX-License-Identifier: GPL-2.0-or-later */
+/*
+ * kmod dups - the kernel module autoloader duplicate suppressor
+ *
+ * Copyright (C) 2023 Luis Chamberlain <mcgrof@kernel.org>
+ */
+
+#define pr_fmt(fmt) "module: " fmt
+
+#include <linux/module.h>
+#include <linux/sched.h>
+#include <linux/sched/task.h>
+#include <linux/binfmts.h>
+#include <linux/syscalls.h>
+#include <linux/unistd.h>
+#include <linux/kmod.h>
+#include <linux/slab.h>
+#include <linux/completion.h>
+#include <linux/cred.h>
+#include <linux/file.h>
+#include <linux/fdtable.h>
+#include <linux/workqueue.h>
+#include <linux/security.h>
+#include <linux/mount.h>
+#include <linux/kernel.h>
+#include <linux/init.h>
+#include <linux/resource.h>
+#include <linux/notifier.h>
+#include <linux/suspend.h>
+#include <linux/rwsem.h>
+#include <linux/ptrace.h>
+#include <linux/async.h>
+#include <linux/uaccess.h>
+
+#undef MODULE_PARAM_PREFIX
+#define MODULE_PARAM_PREFIX "module."
+static bool enable_dups_trace = IS_ENABLED(CONFIG_MODULE_DEBUG_AUTOLOAD_DUPS_TRACE);
+module_param(enable_dups_trace, bool_enable_only, 0644);
+
+/*
+ * Protects dup_kmod_reqs list, adds / removals with RCU.
+ */
+static DEFINE_MUTEX(kmod_dup_mutex);
+static LIST_HEAD(dup_kmod_reqs);
+
+struct kmod_dup_req {
+        struct list_head list;
+        char name[MODULE_NAME_LEN];
+        struct completion first_req_done;
+        struct work_struct complete_work;
+        struct delayed_work delete_work;
+        int dup_ret;
+};
+
+static struct kmod_dup_req *kmod_dup_request_lookup(char *module_name)
+{
+        struct kmod_dup_req *kmod_req;
+
+        list_for_each_entry_rcu(kmod_req, &dup_kmod_reqs, list,
+                                lockdep_is_held(&kmod_dup_mutex)) {
+                if (strlen(kmod_req->name) == strlen(module_name) &&
+                    !memcmp(kmod_req->name, module_name, strlen(module_name))) {
+                        return kmod_req;
+                }
+        }
+
+        return NULL;
+}
+
+static void kmod_dup_request_delete(struct work_struct *work)
+{
+        struct kmod_dup_req *kmod_req;
+        kmod_req = container_of(to_delayed_work(work), struct kmod_dup_req, delete_work);
+
+        /*
+         * The typical situation is a module successfully loaded. In that
+         * situation the module will be present already in userspace. If
+         * new requests come in after that, userspace will already know the
+         * module is loaded so will just return 0 right away. There is still
+         * a small chance right after we delete this entry new request_module()
+         * calls may happen after that, they can happen. These heuristics
+         * are to protect finit_module() abuse for auto-loading, if modules
+         * are still trying to auto-load even if a module is already loaded,
+         * that's on them, and those inefficiencies should not be fixed by
+         * kmod. The inefficiencies there are a call to modprobe and modprobe
+         * just returning 0.
+         */
+        mutex_lock(&kmod_dup_mutex);
+        list_del_rcu(&kmod_req->list);
+        synchronize_rcu();
+        mutex_unlock(&kmod_dup_mutex);
+        kfree(kmod_req);
+}
+
+static void kmod_dup_request_complete(struct work_struct *work)
+{
+        struct kmod_dup_req *kmod_req;
+
+        kmod_req = container_of(work, struct kmod_dup_req, complete_work);
+
+        /*
+         * This will ensure that the kernel will let all the waiters get
+         * informed it's time to check the return value. It's time to
+         * go home.
+         */
+        complete_all(&kmod_req->first_req_done);
+
+        /*
+         * Now that we have allowed prior request_module() calls to go on
+         * with life, let's schedule deleting this entry. We don't have
+         * to do it right away, but we *eventually* want to do it so as not
+         * to let this linger forever as this is just a boot optimization for
+         * possible abuses of vmalloc() incurred by finit_module() thrashing.
+         */
+        queue_delayed_work(system_wq, &kmod_req->delete_work, 60 * HZ);
+}
+
+bool kmod_dup_request_exists_wait(char *module_name, bool wait, int *dup_ret)
+{
+        struct kmod_dup_req *kmod_req, *new_kmod_req;
+        int ret;
+
+        /*
+         * Pre-allocate the entry in case we have to use it later
+         * to avoid contention with the mutex.
+         */
+        new_kmod_req = kzalloc(sizeof(*new_kmod_req), GFP_KERNEL);
+        if (!new_kmod_req)
+                return false;
+
+        memcpy(new_kmod_req->name, module_name, strlen(module_name));
+        INIT_WORK(&new_kmod_req->complete_work, kmod_dup_request_complete);
+        INIT_DELAYED_WORK(&new_kmod_req->delete_work, kmod_dup_request_delete);
+        init_completion(&new_kmod_req->first_req_done);
+
+        mutex_lock(&kmod_dup_mutex);
+
+        kmod_req = kmod_dup_request_lookup(module_name);
+        if (!kmod_req) {
+                /*
+                 * If the first request that came through for a module
+                 * was with request_module_nowait() we cannot wait for it
+                 * and share its return value with other users which may
+                 * have used request_module() and need a proper return value
+                 * so just skip using them as an anchor.
+                 *
+                 * If a prior request to this one came through with
+                 * request_module() though, then a request_module_nowait()
+                 * would benefit from duplicate detection.
+                 */
+                if (!wait) {
+                        kfree(new_kmod_req);
+                        pr_debug("New request_module_nowait() for %s -- cannot track duplicates for this request\n", module_name);
+                        mutex_unlock(&kmod_dup_mutex);
+                        return false;
+                }
+
+                /*
+                 * There was no duplicate, just add the request so we can
+                 * keep tabs on duplicates later.
+                 */
+                pr_debug("New request_module() for %s\n", module_name);
+                list_add_rcu(&new_kmod_req->list, &dup_kmod_reqs);
+                mutex_unlock(&kmod_dup_mutex);
+                return false;
+        }
+        mutex_unlock(&kmod_dup_mutex);
+
+        /* We are dealing with a duplicate request now */
+        kfree(new_kmod_req);
+
+        /*
+         * To fix these try to use try_then_request_module() instead as that
+         * will check if the component you are looking for is present or not.
+         * You could also just queue a single request to load the module once,
+         * instead of having each and everything you need try to request for
+         * the module.
+         *
+         * Duplicate request_module() calls can cause quite a bit of wasted
+         * vmalloc() space when racing with userspace.
+         */
+        if (enable_dups_trace)
+                WARN(1, "module-autoload: duplicate request for module %s\n", module_name);
+        else
+                pr_warn("module-autoload: duplicate request for module %s\n", module_name);
+
+        if (!wait) {
+                /*
+                 * If request_module_nowait() was used then the user just
+                 * wanted to issue the request and if another module request
+                 * was already on its way with the same name we don't care for
+                 * the return value either. Let duplicate request_module_nowait()
+                 * calls bail out right away.
+                 */
+                *dup_ret = 0;
+                return true;
+        }
+
+        /*
+         * If a duplicate request_module() was used they *may* care for
+         * the return value, so we have no other option but to wait for
+         * the first caller to complete. If the first caller used
+         * the request_module_nowait() call, subsequent callers will
+         * deal with the compromise of getting a successful call with this
+         * optimization enabled ...
+         */
+        ret = wait_for_completion_state(&kmod_req->first_req_done,
+                                        TASK_UNINTERRUPTIBLE | TASK_KILLABLE);
+        if (ret) {
+                *dup_ret = ret;
+                return true;
+        }
+
+        /* Now the duplicate request has the same exact return value as the first request */
+        *dup_ret = kmod_req->dup_ret;
+
+        return true;
+}
+
+void kmod_dup_request_announce(char *module_name, int ret)
+{
+        struct kmod_dup_req *kmod_req;
+
+        mutex_lock(&kmod_dup_mutex);
+
+        kmod_req = kmod_dup_request_lookup(module_name);
+        if (!kmod_req)
+                goto out;
+
+        kmod_req->dup_ret = ret;
+
+        /*
+         * If we complete() here we may allow duplicate threads
+         * to continue before the first one that submitted the
+         * request. We're in no rush also, given that each and
+         * every bounce back to userspace is slow we avoid that
+         * with a slight delay here. So queue up the completion
+         * and let duplicates suffer, just wait a tad bit longer.
+         * There is no rush. But we also don't want to hold the
+         * caller up forever or introduce any boot delays.
+         */
+        queue_work(system_wq, &kmod_req->complete_work);
+
+out:
+        mutex_unlock(&kmod_dup_mutex);
+}
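For context, here is roughly how the two entry points above are meant
to be wired into the module autoloader. This is a simplified sketch,
not the actual kernel/module/kmod.c code; run_modprobe() is a
hypothetical placeholder for the real usermode helper invocation:

static int request_module_sketch(char *module_name, bool wait)
{
        int ret, dup_ret;

        /*
         * If an identical request is already in flight, either bail out
         * right away (nowait case) or wait for the first request to finish
         * and reuse its return value, instead of kicking off yet another
         * modprobe and another finit_module() with its own vmalloc() cost.
         */
        if (kmod_dup_request_exists_wait(module_name, wait, &dup_ret))
                return dup_ret;

        ret = run_modprobe(module_name, wait); /* hypothetical placeholder */

        /* Let any duplicate requests waiting on this one learn the result. */
        kmod_dup_request_announce(module_name, ret);

        return ret;
}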
