<feed xmlns='http://www.w3.org/2005/Atom'>
<title>linux/kernel/cpu.c, branch v4.20</title>
<subtitle>Mirror of https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/
</subtitle>
<id>https://git.shady.money/linux/atom?h=v4.20</id>
<link rel='self' href='https://git.shady.money/linux/atom?h=v4.20'/>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/'/>
<updated>2018-11-28T10:57:07Z</updated>
<entry>
<title>x86/speculation: Rework SMT state change</title>
<updated>2018-11-28T10:57:07Z</updated>
<author>
<name>Thomas Gleixner</name>
<email>tglx@linutronix.de</email>
</author>
<published>2018-11-25T18:33:39Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=a74cfffb03b73d41e08f84c2e5c87dec0ce3db9f'/>
<id>urn:sha1:a74cfffb03b73d41e08f84c2e5c87dec0ce3db9f</id>
<content type='text'>
arch_smt_update() is only called when the sysfs SMT control knob is
changed. This means that when SMT is enabled via the sysfs control knob,
the system is considered to have SMT active even if all siblings are
offline.

To allow fine-grained control of the speculation mitigations, the actual
SMT state is more interesting than the fact that siblings could be enabled.

Rework the code, so arch_smt_update() is invoked from each individual CPU
hotplug function, and simplify the update function while at it.
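
A rough sketch of the resulting shape, with heavily abridged hotplug
paths (sketch only, not the actual kernel/cpu.c code):

  /* Generic weak default; x86 overrides this to re-evaluate the
   * speculation mitigations against the actual SMT state. */
  void __weak arch_smt_update(void) { }

  static int _cpu_up(unsigned int cpu /* , ... abridged ... */)
  {
          /* ... bring the CPU online ... */
          arch_smt_update();      /* after every successful online */
          return 0;
  }

  static int _cpu_down(unsigned int cpu /* , ... abridged ... */)
  {
          /* ... take the CPU offline ... */
          arch_smt_update();      /* after every successful offline */
          return 0;
  }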

Signed-off-by: Thomas Gleixner &lt;tglx@linutronix.de&gt;
Reviewed-by: Ingo Molnar &lt;mingo@kernel.org&gt;
Cc: Peter Zijlstra &lt;peterz@infradead.org&gt;
Cc: Andy Lutomirski &lt;luto@kernel.org&gt;
Cc: Linus Torvalds &lt;torvalds@linux-foundation.org&gt;
Cc: Jiri Kosina &lt;jkosina@suse.cz&gt;
Cc: Tom Lendacky &lt;thomas.lendacky@amd.com&gt;
Cc: Josh Poimboeuf &lt;jpoimboe@redhat.com&gt;
Cc: Andrea Arcangeli &lt;aarcange@redhat.com&gt;
Cc: David Woodhouse &lt;dwmw@amazon.co.uk&gt;
Cc: Tim Chen &lt;tim.c.chen@linux.intel.com&gt;
Cc: Andi Kleen &lt;ak@linux.intel.com&gt;
Cc: Dave Hansen &lt;dave.hansen@intel.com&gt;
Cc: Casey Schaufler &lt;casey.schaufler@intel.com&gt;
Cc: Asit Mallick &lt;asit.k.mallick@intel.com&gt;
Cc: Arjan van de Ven &lt;arjan@linux.intel.com&gt;
Cc: Jon Masters &lt;jcm@redhat.com&gt;
Cc: Waiman Long &lt;longman9394@gmail.com&gt;
Cc: Greg KH &lt;gregkh@linuxfoundation.org&gt;
Cc: Dave Stewart &lt;david.c.stewart@intel.com&gt;
Cc: Kees Cook &lt;keescook@chromium.org&gt;
Cc: stable@vger.kernel.org
Link: https://lkml.kernel.org/r/20181125185004.521974984@linutronix.de


</content>
</entry>
<entry>
<title>Merge branch 'x86-pti-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip</title>
<updated>2018-10-23T17:43:04Z</updated>
<author>
<name>Linus Torvalds</name>
<email>torvalds@linux-foundation.org</email>
</author>
<published>2018-10-23T17:43:04Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=d82924c3b8d0607094b94fab290a33c5ad7d586c'/>
<id>urn:sha1:d82924c3b8d0607094b94fab290a33c5ad7d586c</id>
<content type='text'>
Pull x86 pti updates from Ingo Molnar:
 "The main changes:

   - Make the IBPB barrier more strict and add STIBP support (Jiri
     Kosina)

   - Micro-optimize and clean up the entry code (Andy Lutomirski)

   - ... plus misc other fixes"

* 'x86-pti-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  x86/speculation: Propagate information about RSB filling mitigation to sysfs
  x86/speculation: Enable cross-hyperthread spectre v2 STIBP mitigation
  x86/speculation: Apply IBPB more strictly to avoid cross-process data leak
  x86/speculation: Add RETPOLINE_AMD support to the inline asm CALL_NOSPEC variant
  x86/CPU: Fix unused variable warning when !CONFIG_IA32_EMULATION
  x86/pti/64: Remove the SYSCALL64 entry trampoline
  x86/entry/64: Use the TSS sp2 slot for SYSCALL/SYSRET scratch space
  x86/entry/64: Document idtentry
</content>
</entry>
<entry>
<title>Merge branch 'sched-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip</title>
<updated>2018-10-23T14:00:03Z</updated>
<author>
<name>Linus Torvalds</name>
<email>torvalds@linux-foundation.org</email>
</author>
<published>2018-10-23T14:00:03Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=42f52e1c59bdb78cad945b2dd34fa1f892239a39'/>
<id>urn:sha1:42f52e1c59bdb78cad945b2dd34fa1f892239a39</id>
<content type='text'>
Pull scheduler updates from Ingo Molnar:
 "The main changes are:

   - Migrate CPU-intense 'misfit' tasks on asymmetric capacity systems,
     to better utilize (much) faster 'big core' CPUs. (Morten Rasmussen,
     Valentin Schneider)

   - Topology handling improvements, in particular when CPU capacity
     changes and related load-balancing fixes/improvements (Morten
     Rasmussen)

   - ... plus misc other improvements, fixes and updates"

* 'sched-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (28 commits)
  sched/completions/Documentation: Add recommendation for dynamic and ONSTACK completions
  sched/completions/Documentation: Clean up the document some more
  sched/completions/Documentation: Fix a couple of punctuation nits
  cpu/SMT: State SMT is disabled even with nosmt and without "=force"
  sched/core: Fix comment regarding nr_iowait_cpu() and get_iowait_load()
  sched/fair: Remove setting task's se-&gt;runnable_weight during PELT update
  sched/fair: Disable LB_BIAS by default
  sched/pelt: Fix warning and clean up IRQ PELT config
  sched/topology: Make local variables static
  sched/debug: Use symbolic names for task state constants
  sched/numa: Remove unused numa_stats::nr_running field
  sched/numa: Remove unused code from update_numa_stats()
  sched/debug: Explicitly cast sched_feat() to bool
  sched/core: Disable SD_PREFER_SIBLING on asymmetric CPU capacity domains
  sched/fair: Don't move tasks to lower capacity CPUs unless necessary
  sched/fair: Set rq-&gt;rd-&gt;overload when misfit
  sched/fair: Wrap rq-&gt;rd-&gt;overload accesses with READ/WRITE_ONCE()
  sched/core: Change root_domain-&gt;overload type to int
  sched/fair: Change 'prefer_sibling' type to bool
  sched/fair: Kick nohz balance if rq-&gt;misfit_task_load
  ...
</content>
</entry>
<entry>
<title>cpu/SMT: State SMT is disabled even with nosmt and without "=force"</title>
<updated>2018-10-05T08:20:31Z</updated>
<author>
<name>Borislav Petkov</name>
<email>bp@suse.de</email>
</author>
<published>2018-10-04T17:22:27Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=d0e7d14455d41163126afecd0fcce935463cc512'/>
<id>urn:sha1:d0e7d14455d41163126afecd0fcce935463cc512</id>
<content type='text'>
When booting with "nosmt=force", a message is issued to dmesg to
confirm that SMT has been force-disabled, but no such message is
issued when only "nosmt" is on the kernel command line.

Fix that.
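
The message ends up being printed in both branches of the command line
handling; a sketch, simplified from kernel/cpu.c:

  void __init cpu_smt_disable(bool force)
  {
          if (cpu_smt_control == CPU_SMT_FORCE_DISABLED ||
              cpu_smt_control == CPU_SMT_NOT_SUPPORTED)
                  return;

          if (force) {
                  pr_info("SMT: Force disabled\n");
                  cpu_smt_control = CPU_SMT_FORCE_DISABLED;
          } else {
                  pr_info("SMT: disabled\n");     /* the added message */
                  cpu_smt_control = CPU_SMT_DISABLED;
          }
  }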

Signed-off-by: Borislav Petkov &lt;bp@suse.de&gt;
Cc: Linus Torvalds &lt;torvalds@linux-foundation.org&gt;
Cc: Peter Zijlstra &lt;peterz@infradead.org&gt;
Cc: Thomas Gleixner &lt;tglx@linutronix.de&gt;
Link: http://lkml.kernel.org/r/20181004172227.10094-1-bp@alien8.de
Signed-off-by: Ingo Molnar &lt;mingo@kernel.org&gt;
</content>
</entry>
<entry>
<title>x86/speculation: Enable cross-hyperthread spectre v2 STIBP mitigation</title>
<updated>2018-09-26T12:26:52Z</updated>
<author>
<name>Jiri Kosina</name>
<email>jkosina@suse.cz</email>
</author>
<published>2018-09-25T12:38:55Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=53c613fe6349994f023245519265999eed75957f'/>
<id>urn:sha1:53c613fe6349994f023245519265999eed75957f</id>
<content type='text'>
STIBP is a feature provided by certain Intel ucodes / CPUs. This feature
(once enabled) prevents cross-hyperthread control of decisions made by
indirect branch predictors.

Enable this feature if

- the CPU is vulnerable to spectre v2
- the CPU supports SMT and has SMT siblings online
- spectre_v2 mitigation autoselection is enabled (default)

After some previous discussion, this leaves STIBP on all the time, as a
wrmsr on every kernel boundary crossing is a no-no. This could perhaps be
optimized a bit later (like disabling it in NOHZ, experimenting with
disabling it in idle, etc.) if needed.

Note that the synchronization of the mask manipulation via newly added
spec_ctrl_mutex is currently not strictly needed, as the only updater is
already being serialized by cpu_add_remove_lock, but let's make this a
little bit more future-proof.
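
In outline, the update keeps STIBP set in the base SPEC_CTRL value
whenever the SMT conditions above hold; a hedged sketch with simplified
names and control flow (the real logic is in arch/x86/kernel/cpu/bugs.c):

  static DEFINE_MUTEX(spec_ctrl_mutex);   /* serializes mask updates */

  /* Write the current base value into the SPEC_CTRL MSR; run on each
   * online CPU via on_each_cpu(). */
  static void update_stibp_msr(void *info)
  {
          wrmsrl(MSR_IA32_SPEC_CTRL, x86_spec_ctrl_base);
  }

  void arch_smt_update(void)
  {
          mutex_lock(&amp;spec_ctrl_mutex);
          if (cpu_smt_control == CPU_SMT_ENABLED)
                  x86_spec_ctrl_base |= SPEC_CTRL_STIBP;
          else
                  x86_spec_ctrl_base &amp;= ~SPEC_CTRL_STIBP;
          on_each_cpu(update_stibp_msr, NULL, 1);
          mutex_unlock(&amp;spec_ctrl_mutex);
  }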

Signed-off-by: Jiri Kosina &lt;jkosina@suse.cz&gt;
Signed-off-by: Thomas Gleixner &lt;tglx@linutronix.de&gt;
Cc: Peter Zijlstra &lt;peterz@infradead.org&gt;
Cc: Josh Poimboeuf &lt;jpoimboe@redhat.com&gt;
Cc: Andrea Arcangeli &lt;aarcange@redhat.com&gt;
Cc:  "WoodhouseDavid" &lt;dwmw@amazon.co.uk&gt;
Cc: Andi Kleen &lt;ak@linux.intel.com&gt;
Cc: Tim Chen &lt;tim.c.chen@linux.intel.com&gt;
Cc:  "SchauflerCasey" &lt;casey.schaufler@intel.com&gt;
Cc: stable@vger.kernel.org
Link: https://lkml.kernel.org/r/nycvar.YFH.7.76.1809251438240.15880@cbobk.fhfr.pm

</content>
</entry>
<entry>
<title>locking/lockdep, cpu/hotplug: Annotate AP thread</title>
<updated>2018-09-11T18:01:03Z</updated>
<author>
<name>Peter Zijlstra</name>
<email>peterz@infradead.org</email>
</author>
<published>2018-09-11T09:51:27Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=cb92173d1f0474784c6171a9d3fdbbca0ee53554'/>
<id>urn:sha1:cb92173d1f0474784c6171a9d3fdbbca0ee53554</id>
<content type='text'>
Asserting that cpu_hotplug_lock is held (lockdep_assert_cpus_held()) from
AP callbacks fails, because the lock is actually held by the BP.

Stick in an explicit annotation in cpuhp_thread_fun() to make this work.
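
The shape of the annotation (abridged sketch; the helpers wrap the
lockdep map acquire/release for cpu_hotplug_lock):

  static void cpuhp_thread_fun(unsigned int cpu)
  {
          /* ... */

          /* The BP holds cpu_hotplug_lock across the whole hotplug
           * operation; tell lockdep the AP thread is covered by it
           * too, so lockdep_assert_cpus_held() in callbacks passes. */
          lockdep_acquire_cpus_lock();

          /* ... invoke the hotplug state callbacks ... */

          lockdep_release_cpus_lock();
  }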

Reported-by: Ingo Molnar &lt;mingo@kernel.org&gt;
Signed-off-by: Peter Zijlstra (Intel) &lt;peterz@infradead.org&gt;
Cc: Linus Torvalds &lt;torvalds@linux-foundation.org&gt;
Cc: Peter Zijlstra &lt;peterz@infradead.org&gt;
Cc: Thomas Gleixner &lt;tglx@linutronix.de&gt;
Cc: linux-tip-commits@vger.kernel.org
Fixes: cb538267ea1e ("jump_label/lockdep: Assert we hold the hotplug lock for _cpuslocked() operations")
Link: http://lkml.kernel.org/r/20180911095127.GT24082@hirez.programming.kicks-ass.net
Signed-off-by: Ingo Molnar &lt;mingo@kernel.org&gt;
</content>
</entry>
<entry>
<title>cpu/hotplug: Prevent state corruption on error rollback</title>
<updated>2018-09-06T13:21:38Z</updated>
<author>
<name>Thomas Gleixner</name>
<email>tglx@linutronix.de</email>
</author>
<published>2018-09-06T13:21:38Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=69fa6eb7d6a64801ea261025cce9723d9442d773'/>
<id>urn:sha1:69fa6eb7d6a64801ea261025cce9723d9442d773</id>
<content type='text'>
When a teardown callback fails, the CPU hotplug code brings the CPU back to
the previous state. The previous state becomes the new target state. The
rollback happens in undo_cpu_down() which increments the state
unconditionally even if the state is already the same as the target.

As a consequence, the next CPU hotplug operation will start at the wrong
state. This is easy to observe when __cpu_disable() fails.

Prevent the unconditional undo by checking the state vs. the target before
incrementing the state, and fix up the consequently wrong conditional in
the unplug code which handles the failure of the final CPU take-down on
the control CPU side.
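
The guard, as a simplified sketch (names abridged):

  static void undo_cpu_down(unsigned int cpu, struct cpuhp_cpu_state *st)
  {
          /* Only step forward when the rollback target actually lies
           * ahead of the current state; the previous unconditional
           * increment could overshoot the target. */
          if (st-&gt;state &lt; st-&gt;target)
                  st-&gt;state++;

          /* ... re-run the startup callbacks up to st-&gt;target ... */
  }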

Fixes: 4dddfb5faa61 ("smp/hotplug: Rewrite AP state machine core")
Reported-by: Neeraj Upadhyay &lt;neeraju@codeaurora.org&gt;
Signed-off-by: Thomas Gleixner &lt;tglx@linutronix.de&gt;
Tested-by: Geert Uytterhoeven &lt;geert+renesas@glider.be&gt;
Tested-by: Sudeep Holla &lt;sudeep.holla@arm.com&gt;
Tested-by: Neeraj Upadhyay &lt;neeraju@codeaurora.org&gt;
Cc: josh@joshtriplett.org
Cc: peterz@infradead.org
Cc: jiangshanlai@gmail.com
Cc: dzickus@redhat.com
Cc: brendan.jackman@arm.com
Cc: malat@debian.org
Cc: sramana@codeaurora.org
Cc: linux-arm-msm@vger.kernel.org
Cc: stable@vger.kernel.org
Link: https://lkml.kernel.org/r/alpine.DEB.2.21.1809051419580.1416@nanos.tec.linutronix.de

</content>
</entry>
<entry>
<title>cpu/hotplug: Adjust misplaced smp_mb() in cpuhp_thread_fun()</title>
<updated>2018-09-06T13:21:37Z</updated>
<author>
<name>Neeraj Upadhyay</name>
<email>neeraju@codeaurora.org</email>
</author>
<published>2018-09-05T05:52:07Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=f8b7530aa0a1def79c93101216b5b17cf408a70a'/>
<id>urn:sha1:f8b7530aa0a1def79c93101216b5b17cf408a70a</id>
<content type='text'>
The smp_mb() in cpuhp_thread_fun() is misplaced. It needs to be after the
load of st-&gt;should_run to prevent reordering of the later load/stores
w.r.t. the load of st-&gt;should_run.
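
With the fix, the ordering looks roughly like this (abridged):

  static void cpuhp_thread_fun(unsigned int cpu)
  {
          struct cpuhp_cpu_state *st = this_cpu_ptr(&amp;cpuhp_state);

          if (WARN_ON_ONCE(!st-&gt;should_run))
                  return;

          /* ACQUIRE for the st-&gt;should_run load above: none of the
           * later loads/stores may be reordered before it. */
          smp_mb();

          /* ... act on the rest of *st ... */
  }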

Fixes: 4dddfb5faa61 ("smp/hotplug: Rewrite AP state machine core")
Signed-off-by: Neeraj Upadhyay &lt;neeraju@codeaurora.org&gt;
Signed-off-by: Thomas Gleixner &lt;tglx@linutronix.de&gt;
Acked-by: Peter Zijlstra (Intel) &lt;peterz@infradead.org&gt;
Cc: josh@joshtriplett.org
Cc: peterz@infradead.org
Cc: jiangshanlai@gmail.com
Cc: dzickus@redhat.com
Cc: brendan.jackman@arm.com
Cc: malat@debian.org
Cc: mojha@codeaurora.org
Cc: sramana@codeaurora.org
Cc: linux-arm-msm@vger.kernel.org
Cc: stable@vger.kernel.org
Link: https://lkml.kernel.org/r/1536126727-11629-1-git-send-email-neeraju@codeaurora.org

</content>
</entry>
<entry>
<title>cpu/hotplug: Remove skip_onerr field from cpuhp_step structure</title>
<updated>2018-08-31T12:13:03Z</updated>
<author>
<name>Mukesh Ojha</name>
<email>mojha@codeaurora.org</email>
</author>
<published>2018-08-28T06:54:54Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=6fb86d97207880c1286cd4cb3a7e6a598afbc727'/>
<id>urn:sha1:6fb86d97207880c1286cd4cb3a7e6a598afbc727</id>
<content type='text'>
When the hotplug notifiers still existed, `skip_onerr` was used to avoid
calling particular step startup/teardown callbacks in the CPU up/down
rollback path, which made the hotplug flow asymmetric.

As notifiers are gone now after the full state machine conversion, the
`skip_onerr` field is no longer required.

Remove it from the structure and its usage.
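
For illustration, the structure after the removal (abridged sketch, not
the full definition):

  struct cpuhp_step {
          const char      *name;
          /* startup/teardown callback unions, instance list, ... */
          bool            cant_stop;
          bool            multi_instance;
          /* bool skip_onerr; -- removed: only the old notifier-based
           * rollback path needed it */
  };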

Signed-off-by: Mukesh Ojha &lt;mojha@codeaurora.org&gt;
Signed-off-by: Thomas Gleixner &lt;tglx@linutronix.de&gt;
Acked-by: Peter Zijlstra (Intel) &lt;peterz@infradead.org&gt;
Link: https://lkml.kernel.org/r/1535439294-31426-1-git-send-email-mojha@codeaurora.org
</content>
</entry>
<entry>
<title>cpu/hotplug: Non-SMP machines do not make use of booted_once</title>
<updated>2018-08-14T22:00:00Z</updated>
<author>
<name>Abel Vesa</name>
<email>abelvesa@linux.com</email>
</author>
<published>2018-08-14T21:26:00Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=269777aa530f3438ec1781586cdac0b5fe47b061'/>
<id>urn:sha1:269777aa530f3438ec1781586cdac0b5fe47b061</id>
<content type='text'>
Commit 0cc3cd21657b ("cpu/hotplug: Boot HT siblings at least once")
breaks non-SMP builds.

[ I suspect the 'bool' fields should just be made to be bitfields and be
  exposed regardless of configuration, but that's a separate cleanup
  that I'll leave to the owners of this file for later.   - Linus ]
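
The fix keeps booted_once and its users under CONFIG_SMP; a rough
sketch of the shape (abridged, and the exact field layout here is an
assumption):

  struct cpuhp_cpu_state {
          enum cpuhp_state        state;
          enum cpuhp_state        target;
  #ifdef CONFIG_SMP
          struct task_struct      *thread;
          /* ... */
          bool                    booted_once;
  #endif
  };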

Fixes: 0cc3cd21657b ("cpu/hotplug: Boot HT siblings at least once")
Cc: Dave Hansen &lt;dave.hansen@intel.com&gt;
Cc: Thomas Gleixner &lt;tglx@linutronix.de&gt;
Cc: Tony Luck &lt;tony.luck@intel.com&gt;
Signed-off-by: Abel Vesa &lt;abelvesa@linux.com&gt;
Signed-off-by: Linus Torvalds &lt;torvalds@linux-foundation.org&gt;
</content>
</entry>
</feed>
