<feed xmlns='http://www.w3.org/2005/Atom'>
<title>linux/kernel/rcu/tree.c, branch v3.14</title>
<subtitle>Mirror of https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/</subtitle>
<id>https://git.shady.money/linux/atom?h=v3.14</id>
<link rel='self' href='https://git.shady.money/linux/atom?h=v3.14'/>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/'/>
<updated>2014-01-20T18:25:12Z</updated>
<entry>
<title>Merge branch 'core-rcu-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip</title>
<updated>2014-01-20T18:25:12Z</updated>
<author>
<name>Linus Torvalds</name>
<email>torvalds@linux-foundation.org</email>
</author>
<published>2014-01-20T18:25:12Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=a693c46e14c9fdadbcd68ddfa94a4f72495531a9'/>
<id>urn:sha1:a693c46e14c9fdadbcd68ddfa94a4f72495531a9</id>
<content type='text'>
Pull RCU updates from Ingo Molnar:
 - add RCU torture scripts/tooling
 - static analysis improvements
 - update RCU documentation
 - miscellaneous fixes

* 'core-rcu-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (52 commits)
  rcu: Remove "extern" from function declarations in kernel/rcu/rcu.h
  rcu: Remove "extern" from function declarations in include/linux/*rcu*.h
  rcu/torture: Dynamically allocate SRCU output buffer to avoid overflow
  rcu: Don't activate RCU core on NO_HZ_FULL CPUs
  rcu: Warn on allegedly impossible rcu_read_unlock_special() from irq
  rcu: Add an RCU_INITIALIZER for global RCU-protected pointers
  rcu: Make rcu_assign_pointer's assignment volatile and type-safe
  bonding: Use RCU_INIT_POINTER() for better overhead and for sparse
  rcu: Add comment on evaluate-once properties of rcu_assign_pointer().
  rcu: Provide better diagnostics for blocking in RCU callback functions
  rcu: Improve SRCU's grace-period comments
  rcu: Fix CONFIG_RCU_FANOUT_EXACT for odd fanout/leaf values
  rcu: Fix coccinelle warnings
  rcutorture: Stop tracking FSF's postal address
  rcutorture: Move checkarg to functions.sh
  rcutorture: Flag errors and warnings with color coding
  rcutorture: Record results from repeated runs of the same test scenario
  rcutorture: Test summary at end of run with less chattiness
  rcutorture: Update comment in kvm.sh listing typical RCU trace events
  rcutorture: Add tracing-enabled version of TREE08
  ...
</content>
</entry>
<entry>
<title>rcu: Apply smp_mb__after_unlock_lock() to preserve grace periods</title>
<updated>2013-12-16T10:36:16Z</updated>
<author>
<name>Paul E. McKenney</name>
<email>paulmck@linux.vnet.ibm.com</email>
</author>
<published>2013-12-11T21:59:10Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=6303b9c87d52eaedc82968d3ff59c471e7682afc'/>
<id>urn:sha1:6303b9c87d52eaedc82968d3ff59c471e7682afc</id>
<content type='text'>
RCU must ensure that there is the equivalent of a full memory
barrier between any memory access preceding a given grace period
and any memory access following that same grace period, regardless
of which CPU(s) happen to execute the two memory accesses.
Therefore, downgrading UNLOCK+LOCK so that it no longer implies a
full memory barrier requires some adjustments to RCU.

This commit therefore adds smp_mb__after_unlock_lock()
invocations as needed after the RCU lock acquisitions that need
to be part of a full-memory-barrier UNLOCK+LOCK.

Signed-off-by: Paul E. McKenney &lt;paulmck@linux.vnet.ibm.com&gt;
Reviewed-by: Peter Zijlstra &lt;a.p.zijlstra@chello.nl&gt;
Cc: &lt;linux-arch@vger.kernel.org&gt;
Cc: Linus Torvalds &lt;torvalds@linux-foundation.org&gt;
Cc: Andrew Morton &lt;akpm@linux-foundation.org&gt;
Link: http://lkml.kernel.org/r/1386799151-2219-7-git-send-email-paulmck@linux.vnet.ibm.com
Signed-off-by: Ingo Molnar &lt;mingo@kernel.org&gt;
</content>
</entry>
<entry>
<title>rcu: Don't activate RCU core on NO_HZ_FULL CPUs</title>
<updated>2013-12-12T20:34:15Z</updated>
<author>
<name>Paul E. McKenney</name>
<email>paulmck@linux.vnet.ibm.com</email>
</author>
<published>2013-11-08T17:03:10Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=a096932f0c9c9dca9cce72f1c0fb2395df8f2dff'/>
<id>urn:sha1:a096932f0c9c9dca9cce72f1c0fb2395df8f2dff</id>
<content type='text'>
Whenever a CPU receives a scheduling-clock interrupt, RCU checks to see
if the RCU core needs anything from this CPU.  If so, RCU raises
RCU_SOFTIRQ to carry out any needed processing.

This approach has worked well historically, but it is undesirable on
NO_HZ_FULL CPUs.  Such CPUs are expected to spend almost all of their time
in userspace, so that scheduling-clock interrupts can be disabled while
there is only one runnable task on the CPU in question.  Unfortunately,
raising any softirq has the potential to wake up ksoftirqd, which would
provide the second runnable task on that CPU, preventing disabling of
scheduling-clock interrupts.

What is needed instead is for RCU to leave NO_HZ_FULL CPUs alone,
relying on the grace-period kthreads' quiescent-state forcing to
do any needed RCU work on behalf of those CPUs.

This commit therefore refrains from raising RCU_SOFTIRQ on any
NO_HZ_FULL CPUs during any grace periods that have been in effect
for less than one second.  The one-second limit handles the case
where an inappropriate workload is running on a NO_HZ_FULL CPU
that features lots of scheduling-clock interrupts, but no idle
or userspace time.

Reported-by: Mike Galbraith &lt;bitbucket@online.de&gt;
Signed-off-by: Paul E. McKenney &lt;paulmck@linux.vnet.ibm.com&gt;
Tested-by: Mike Galbraith &lt;bitbucket@online.de&gt;
Toasted-by: Frederic Weisbecker &lt;fweisbec@gmail.com&gt;
</content>
</entry>
<entry>
<title>rcu: Fix CONFIG_RCU_FANOUT_EXACT for odd fanout/leaf values</title>
<updated>2013-12-09T23:12:38Z</updated>
<author>
<name>Paul E. McKenney</name>
<email>paulmck@linux.vnet.ibm.com</email>
</author>
<published>2013-10-16T15:39:10Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=04f34650ca5e8445aae0ab3e0ff6704f141150a8'/>
<id>urn:sha1:04f34650ca5e8445aae0ab3e0ff6704f141150a8</id>
<content type='text'>
Each element of the rcu_state structure's -&gt;levelspread[] array
is intended to contain the per-level fanout, where the zero-th
element corresponds to the root of the rcu_node tree, and the last
element corresponds to the leaves.  In the CONFIG_RCU_FANOUT_EXACT
case, this means that the last element should be filled in
from CONFIG_RCU_FANOUT_LEAF (or from the rcu_fanout_leaf boot
parameter, if provided) and that the remaining elements should
be filled in from CONFIG_RCU_FANOUT.  Unfortunately, the current
code in rcu_init_levelspread() takes the opposite approach, placing
CONFIG_RCU_FANOUT_LEAF in the zero-th element and CONFIG_RCU_FANOUT in
the remaining elements.

For typical power-of-two values, this generates odd but functional
rcu_node trees.  However, other values, for example CONFIG_RCU_FANOUT=3
and CONFIG_RCU_FANOUT_LEAF=2, generate trees that can leave some CPUs
out of the grace-period computation, resulting in too-short grace periods
and therefore a broken RCU implementation.

This commit therefore fixes rcu_init_levelspread() to set the last
-&gt;levelspread[] array element from CONFIG_RCU_FANOUT_LEAF and the
remaining elements from CONFIG_RCU_FANOUT, thus generating the
intended rcu_node trees.

Signed-off-by: Paul E. McKenney &lt;paulmck@linux.vnet.ibm.com&gt;
</content>
</entry>
<entry>
<title>rcu: Fix coccinelle warnings</title>
<updated>2013-12-09T23:12:25Z</updated>
<author>
<name>Fengguang Wu</name>
<email>fengguang.wu@intel.com</email>
</author>
<published>2013-10-10T18:08:33Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=f6f7ee9af7554e4d167ccd0ffe7cb8da0aa954f9'/>
<id>urn:sha1:f6f7ee9af7554e4d167ccd0ffe7cb8da0aa954f9</id>
<content type='text'>
This commit fixes the following coccinelle warning:

kernel/rcu/tree.c:712:9-10: WARNING: return of 0/1 in function
'rcu_lockdep_current_cpu_online' with return type bool

Return statements in functions returning bool should use
true/false instead of 1/0.

Generated by: coccinelle/misc/boolreturn.cocci

Signed-off-by: Fengguang Wu &lt;fengguang.wu@intel.com&gt;
Signed-off-by: Paul E. McKenney &lt;paulmck@linux.vnet.ibm.com&gt;
</content>
</entry>
<entry>
<title>rcu: Let the world know when RCU adjusts its geometry</title>
<updated>2013-12-03T18:10:19Z</updated>
<author>
<name>Paul E. McKenney</name>
<email>paulmck@linux.vnet.ibm.com</email>
</author>
<published>2013-10-09T22:20:33Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=3947909814f38d524829bc41bd4c11068a15f0cd'/>
<id>urn:sha1:3947909814f38d524829bc41bd4c11068a15f0cd</id>
<content type='text'>
Some RCU bugs have been specific to the layout of the rcu_node tree,
but RCU will silently adjust the tree at boot time if appropriate.
This obscures valuable debugging information, so print a message when
this happens.

Signed-off-by: Paul E. McKenney &lt;paulmck@linux.vnet.ibm.com&gt;
</content>
</entry>
<entry>
<title>rcu: Allow task-level idle entry/exit nesting</title>
<updated>2013-12-03T18:10:19Z</updated>
<author>
<name>Paul E. McKenney</name>
<email>paulmck@linux.vnet.ibm.com</email>
</author>
<published>2013-10-05T01:48:55Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=3a5924052aec266f1035f2ff610b60b7d10dbe7f'/>
<id>urn:sha1:3a5924052aec266f1035f2ff610b60b7d10dbe7f</id>
<content type='text'>
The current task-level idle entry/exit code forces an entry/exit on
each call, regardless of the nesting level.  This commit therefore
properly accounts for nesting.

Signed-off-by: Paul E. McKenney &lt;paulmck@linux.vnet.ibm.com&gt;
Reviewed-by: Frederic Weisbecker &lt;fweisbec@gmail.com&gt;
</content>
</entry>
<entry>
<title>rcu: Break call_rcu() deadlock involving scheduler and perf</title>
<updated>2013-12-03T18:10:18Z</updated>
<author>
<name>Paul E. McKenney</name>
<email>paulmck@linux.vnet.ibm.com</email>
</author>
<published>2013-10-04T21:33:34Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=96d3fd0d315a949e30adc80f086031c5cdf070d1'/>
<id>urn:sha1:96d3fd0d315a949e30adc80f086031c5cdf070d1</id>
<content type='text'>
Dave Jones got the following lockdep splat:

&gt;  ======================================================
&gt;  [ INFO: possible circular locking dependency detected ]
&gt;  3.12.0-rc3+ #92 Not tainted
&gt;  -------------------------------------------------------
&gt;  trinity-child2/15191 is trying to acquire lock:
&gt;   (&amp;rdp-&gt;nocb_wq){......}, at: [&lt;ffffffff8108ff43&gt;] __wake_up+0x23/0x50
&gt;
&gt; but task is already holding lock:
&gt;   (&amp;ctx-&gt;lock){-.-...}, at: [&lt;ffffffff81154c19&gt;] perf_event_exit_task+0x109/0x230
&gt;
&gt; which lock already depends on the new lock.
&gt;
&gt;
&gt; the existing dependency chain (in reverse order) is:
&gt;
&gt; -&gt; #3 (&amp;ctx-&gt;lock){-.-...}:
&gt;         [&lt;ffffffff810cc243&gt;] lock_acquire+0x93/0x200
&gt;         [&lt;ffffffff81733f90&gt;] _raw_spin_lock+0x40/0x80
&gt;         [&lt;ffffffff811500ff&gt;] __perf_event_task_sched_out+0x2df/0x5e0
&gt;         [&lt;ffffffff81091b83&gt;] perf_event_task_sched_out+0x93/0xa0
&gt;         [&lt;ffffffff81732052&gt;] __schedule+0x1d2/0xa20
&gt;         [&lt;ffffffff81732f30&gt;] preempt_schedule_irq+0x50/0xb0
&gt;         [&lt;ffffffff817352b6&gt;] retint_kernel+0x26/0x30
&gt;         [&lt;ffffffff813eed04&gt;] tty_flip_buffer_push+0x34/0x50
&gt;         [&lt;ffffffff813f0504&gt;] pty_write+0x54/0x60
&gt;         [&lt;ffffffff813e900d&gt;] n_tty_write+0x32d/0x4e0
&gt;         [&lt;ffffffff813e5838&gt;] tty_write+0x158/0x2d0
&gt;         [&lt;ffffffff811c4850&gt;] vfs_write+0xc0/0x1f0
&gt;         [&lt;ffffffff811c52cc&gt;] SyS_write+0x4c/0xa0
&gt;         [&lt;ffffffff8173d4e4&gt;] tracesys+0xdd/0xe2
&gt;
&gt; -&gt; #2 (&amp;rq-&gt;lock){-.-.-.}:
&gt;         [&lt;ffffffff810cc243&gt;] lock_acquire+0x93/0x200
&gt;         [&lt;ffffffff81733f90&gt;] _raw_spin_lock+0x40/0x80
&gt;         [&lt;ffffffff810980b2&gt;] wake_up_new_task+0xc2/0x2e0
&gt;         [&lt;ffffffff81054336&gt;] do_fork+0x126/0x460
&gt;         [&lt;ffffffff81054696&gt;] kernel_thread+0x26/0x30
&gt;         [&lt;ffffffff8171ff93&gt;] rest_init+0x23/0x140
&gt;         [&lt;ffffffff81ee1e4b&gt;] start_kernel+0x3f6/0x403
&gt;         [&lt;ffffffff81ee1571&gt;] x86_64_start_reservations+0x2a/0x2c
&gt;         [&lt;ffffffff81ee1664&gt;] x86_64_start_kernel+0xf1/0xf4
&gt;
&gt; -&gt; #1 (&amp;p-&gt;pi_lock){-.-.-.}:
&gt;         [&lt;ffffffff810cc243&gt;] lock_acquire+0x93/0x200
&gt;         [&lt;ffffffff8173419b&gt;] _raw_spin_lock_irqsave+0x4b/0x90
&gt;         [&lt;ffffffff810979d1&gt;] try_to_wake_up+0x31/0x350
&gt;         [&lt;ffffffff81097d62&gt;] default_wake_function+0x12/0x20
&gt;         [&lt;ffffffff81084af8&gt;] autoremove_wake_function+0x18/0x40
&gt;         [&lt;ffffffff8108ea38&gt;] __wake_up_common+0x58/0x90
&gt;         [&lt;ffffffff8108ff59&gt;] __wake_up+0x39/0x50
&gt;         [&lt;ffffffff8110d4f8&gt;] __call_rcu_nocb_enqueue+0xa8/0xc0
&gt;         [&lt;ffffffff81111450&gt;] __call_rcu+0x140/0x820
&gt;         [&lt;ffffffff81111b8d&gt;] call_rcu+0x1d/0x20
&gt;         [&lt;ffffffff81093697&gt;] cpu_attach_domain+0x287/0x360
&gt;         [&lt;ffffffff81099d7e&gt;] build_sched_domains+0xe5e/0x10a0
&gt;         [&lt;ffffffff81efa7fc&gt;] sched_init_smp+0x3b7/0x47a
&gt;         [&lt;ffffffff81ee1f4e&gt;] kernel_init_freeable+0xf6/0x202
&gt;         [&lt;ffffffff817200be&gt;] kernel_init+0xe/0x190
&gt;         [&lt;ffffffff8173d22c&gt;] ret_from_fork+0x7c/0xb0
&gt;
&gt; -&gt; #0 (&amp;rdp-&gt;nocb_wq){......}:
&gt;         [&lt;ffffffff810cb7ca&gt;] __lock_acquire+0x191a/0x1be0
&gt;         [&lt;ffffffff810cc243&gt;] lock_acquire+0x93/0x200
&gt;         [&lt;ffffffff8173419b&gt;] _raw_spin_lock_irqsave+0x4b/0x90
&gt;         [&lt;ffffffff8108ff43&gt;] __wake_up+0x23/0x50
&gt;         [&lt;ffffffff8110d4f8&gt;] __call_rcu_nocb_enqueue+0xa8/0xc0
&gt;         [&lt;ffffffff81111450&gt;] __call_rcu+0x140/0x820
&gt;         [&lt;ffffffff81111bb0&gt;] kfree_call_rcu+0x20/0x30
&gt;         [&lt;ffffffff81149abf&gt;] put_ctx+0x4f/0x70
&gt;         [&lt;ffffffff81154c3e&gt;] perf_event_exit_task+0x12e/0x230
&gt;         [&lt;ffffffff81056b8d&gt;] do_exit+0x30d/0xcc0
&gt;         [&lt;ffffffff8105893c&gt;] do_group_exit+0x4c/0xc0
&gt;         [&lt;ffffffff810589c4&gt;] SyS_exit_group+0x14/0x20
&gt;         [&lt;ffffffff8173d4e4&gt;] tracesys+0xdd/0xe2
&gt;
&gt; other info that might help us debug this:
&gt;
&gt; Chain exists of:
&gt;   &amp;rdp-&gt;nocb_wq --&gt; &amp;rq-&gt;lock --&gt; &amp;ctx-&gt;lock
&gt;
&gt;   Possible unsafe locking scenario:
&gt;
&gt;         CPU0                    CPU1
&gt;         ----                    ----
&gt;    lock(&amp;ctx-&gt;lock);
&gt;                                 lock(&amp;rq-&gt;lock);
&gt;                                 lock(&amp;ctx-&gt;lock);
&gt;    lock(&amp;rdp-&gt;nocb_wq);
&gt;
&gt;  *** DEADLOCK ***
&gt;
&gt; 1 lock held by trinity-child2/15191:
&gt;  #0:  (&amp;ctx-&gt;lock){-.-...}, at: [&lt;ffffffff81154c19&gt;] perf_event_exit_task+0x109/0x230
&gt;
&gt; stack backtrace:
&gt; CPU: 2 PID: 15191 Comm: trinity-child2 Not tainted 3.12.0-rc3+ #92
&gt;  ffffffff82565b70 ffff880070c2dbf8 ffffffff8172a363 ffffffff824edf40
&gt;  ffff880070c2dc38 ffffffff81726741 ffff880070c2dc90 ffff88022383b1c0
&gt;  ffff88022383aac0 0000000000000000 ffff88022383b188 ffff88022383b1c0
&gt; Call Trace:
&gt;  [&lt;ffffffff8172a363&gt;] dump_stack+0x4e/0x82
&gt;  [&lt;ffffffff81726741&gt;] print_circular_bug+0x200/0x20f
&gt;  [&lt;ffffffff810cb7ca&gt;] __lock_acquire+0x191a/0x1be0
&gt;  [&lt;ffffffff810c6439&gt;] ? get_lock_stats+0x19/0x60
&gt;  [&lt;ffffffff8100b2f4&gt;] ? native_sched_clock+0x24/0x80
&gt;  [&lt;ffffffff810cc243&gt;] lock_acquire+0x93/0x200
&gt;  [&lt;ffffffff8108ff43&gt;] ? __wake_up+0x23/0x50
&gt;  [&lt;ffffffff8173419b&gt;] _raw_spin_lock_irqsave+0x4b/0x90
&gt;  [&lt;ffffffff8108ff43&gt;] ? __wake_up+0x23/0x50
&gt;  [&lt;ffffffff8108ff43&gt;] __wake_up+0x23/0x50
&gt;  [&lt;ffffffff8110d4f8&gt;] __call_rcu_nocb_enqueue+0xa8/0xc0
&gt;  [&lt;ffffffff81111450&gt;] __call_rcu+0x140/0x820
&gt;  [&lt;ffffffff8109bc8f&gt;] ? local_clock+0x3f/0x50
&gt;  [&lt;ffffffff81111bb0&gt;] kfree_call_rcu+0x20/0x30
&gt;  [&lt;ffffffff81149abf&gt;] put_ctx+0x4f/0x70
&gt;  [&lt;ffffffff81154c3e&gt;] perf_event_exit_task+0x12e/0x230
&gt;  [&lt;ffffffff81056b8d&gt;] do_exit+0x30d/0xcc0
&gt;  [&lt;ffffffff810c9af5&gt;] ? trace_hardirqs_on_caller+0x115/0x1e0
&gt;  [&lt;ffffffff810c9bcd&gt;] ? trace_hardirqs_on+0xd/0x10
&gt;  [&lt;ffffffff8105893c&gt;] do_group_exit+0x4c/0xc0
&gt;  [&lt;ffffffff810589c4&gt;] SyS_exit_group+0x14/0x20
&gt;  [&lt;ffffffff8173d4e4&gt;] tracesys+0xdd/0xe2

The underlying problem is that perf is invoking call_rcu() with the
scheduler locks held, but in NOCB mode, call_rcu() will with high
probability invoke the scheduler -- which just might want to use its
locks.  The reason that call_rcu() needs to invoke the scheduler is
to wake up the corresponding rcuo callback-offload kthread, which
does the job of starting up a grace period and invoking the callbacks
afterwards.

One solution (championed on a related problem by Lai Jiangshan) is to
simply defer the wakeup to some point where scheduler locks are no longer
held.  Since we don't want to unnecessarily incur the cost of such
deferral, the task before us is threefold:

1.	Determine when it is likely that a relevant scheduler lock is held.

2.	Defer the wakeup in such cases.

3.	Ensure that all deferred wakeups eventually happen, preferably
	sooner rather than later.

We use irqs_disabled_flags() as a proxy for relevant scheduler locks
being held.  This works because the relevant locks are always acquired
with interrupts disabled.  We may defer more often than needed, but that
is at least safe.

The wakeup deferral is tracked via a new field in the per-CPU and
per-RCU-flavor rcu_data structure, namely -&gt;nocb_defer_wakeup.

This flag is checked by the RCU core processing.  The __rcu_pending()
function now checks this flag, which causes rcu_check_callbacks()
to initiate RCU core processing at each scheduling-clock interrupt
where this flag is set.  Of course this is not sufficient because
scheduling-clock interrupts are often turned off (the things we used to
be able to count on!).  So the flags are also checked on entry to any
state that RCU considers to be idle, which includes both NO_HZ_IDLE idle
state and NO_HZ_FULL user-mode-execution state.

This approach should allow call_rcu() to be invoked regardless of what
locks you might be holding, the key word being "should".

Reported-by: Dave Jones &lt;davej@redhat.com&gt;
Signed-off-by: Paul E. McKenney &lt;paulmck@linux.vnet.ibm.com&gt;
Cc: Peter Zijlstra &lt;peterz@infradead.org&gt;
</content>
</entry>
<entry>
<title>rcu: Fix and comment ordering around wait_event()</title>
<updated>2013-12-03T18:10:18Z</updated>
<author>
<name>Paul E. McKenney</name>
<email>paulmck@linux.vnet.ibm.com</email>
</author>
<published>2013-09-24T22:04:06Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=78e4bc34e5d966cfd95f1238565afc399d56225c'/>
<id>urn:sha1:78e4bc34e5d966cfd95f1238565afc399d56225c</id>
<content type='text'>
It is all too easy to forget that wait_event() does not necessarily
imply a full memory barrier.  The case where it does not is where the
condition transitions to true just as wait_event() starts execution.
This is actually a feature: The standard use of wait_event() involves
locking, in which case the locks provide the needed ordering (you hold a
lock across the wake_up() and acquire that same lock after wait_event()
returns).

Given that I did forget that wait_event() does not necessarily imply a
full memory barrier in one case, this commit fixes that case.  This commit
also adds comments calling out the placement of existing memory barriers
relied on by wait_event() calls.

Signed-off-by: Paul E. McKenney &lt;paulmck@linux.vnet.ibm.com&gt;
</content>
</entry>
<entry>
<title>rcu: Kick CPU halfway to RCU CPU stall warning</title>
<updated>2013-12-03T18:10:18Z</updated>
<author>
<name>Paul E. McKenney</name>
<email>paulmck@linux.vnet.ibm.com</email>
</author>
<published>2013-09-23T20:57:18Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=6193c76aba8ec3cc5f083c35efbab9ed924125f6'/>
<id>urn:sha1:6193c76aba8ec3cc5f083c35efbab9ed924125f6</id>
<content type='text'>
When an RCU CPU stall warning occurs, the CPU invokes resched_cpu() on
itself.  This can help move the grace period forward in some situations,
but it would be even better to do this -before- the RCU CPU stall warning.
This commit therefore causes resched_cpu() to be called every five jiffies
once the system is halfway to an RCU CPU stall warning.

Signed-off-by: Paul E. McKenney &lt;paulmck@linux.vnet.ibm.com&gt;
</content>
</entry>
</feed>
