| author | Heiko Carstens <hca@linux.ibm.com> | 2026-01-17 15:53:26 +0100 |
|---|---|---|
| committer | Heiko Carstens <hca@linux.ibm.com> | 2026-01-17 15:53:26 +0100 |
| commit | 86302ddf20e6b27ef463006c8bde6fd753056101 (patch) | |
| tree | 0333bbd118d452d31e0e833efe9a9d9e9902ff3e /Documentation/features/debug/stackprotector | |
| parent | 12ea976f955cefb06eeae4c9e5eb48d08038ccb2 (diff) | |
| parent | 48b4790f054994d4df6d1025ec9267b19618f0ec (diff) | |
Merge branch 'preempt'
Heiko Carstens says:
====================
The option to select PREEMPT_NONE will go away for all architectures which
support PREEMPT_LAZY [1]. Until now all distributions have provided kernels
built with PREEMPT_NONE enabled for s390. In particular this means that all
preempt_disable() / preempt_enable() pairs are optimized away at compile
time.
With PREEMPT_LAZY this is not the case: switching to PREEMPT_LAZY increases
the kernel image size by ~218kb (defconfig, gcc15).
s390 provides optimized preempt primitives, however there is still room for
improvement. Since support for a relocatable lowcore was added, accessing
preempt_count in lowcore requires an extra call to get_lowcore(), which
generates an extra instruction. In addition, all instructions accessing
preempt_count must use a non-zero base register.
Address this by adding a couple of inline assemblies with alternatives.
This generates better code and reduces the size of a kernel image built
with PREEMPT_LAZY by ~58kb.
[1] https://lore.kernel.org/all/20251219101502.GB1132199@noisy.programming.kicks-ass.net/
====================
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
Diffstat (limited to 'Documentation/features/debug/stackprotector')
0 files changed, 0 insertions, 0 deletions
