<feed xmlns='http://www.w3.org/2005/Atom'>
<title>linux/kernel/rcu/tree_plugin.h, branch v4.6</title>
<subtitle>Mirror of https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/</subtitle>
<id>https://git.shady.money/linux/atom?h=v4.6</id>
<link rel='self' href='https://git.shady.money/linux/atom?h=v4.6'/>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/'/>
<updated>2016-03-15T08:01:06Z</updated>
<entry>
<title>Merge commit 'fixes.2015.02.23a' into core/rcu</title>
<updated>2016-03-15T08:01:06Z</updated>
<author>
<name>Ingo Molnar</name>
<email>mingo@kernel.org</email>
</author>
<published>2016-03-15T08:00:12Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=8bc6782fe20bd2584c73a35c47329c9fd0a8d34c'/>
<id>urn:sha1:8bc6782fe20bd2584c73a35c47329c9fd0a8d34c</id>
<content type='text'>
 Conflicts:
	kernel/rcu/tree.c

Signed-off-by: Ingo Molnar &lt;mingo@kernel.org&gt;
</content>
</entry>
<entry>
<title>rcu: Use simple wait queues where possible in rcutree</title>
<updated>2016-02-25T10:27:16Z</updated>
<author>
<name>Paul Gortmaker</name>
<email>paul.gortmaker@windriver.com</email>
</author>
<published>2016-02-19T08:46:41Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=abedf8e2419fb873d919dd74de2e84b510259339'/>
<id>urn:sha1:abedf8e2419fb873d919dd74de2e84b510259339</id>
<content type='text'>
As of commit dae6e64d2bcfd ("rcu: Introduce proper blocking to no-CBs kthreads
GP waits") the RCU subsystem started making use of wait queues.

Here we convert all additions of RCU wait queues to use simple wait queues,
since they don't need the extra overhead of the full wait queue features.

Originally this was done for RT kernels[1], since we would get things like...

  BUG: sleeping function called from invalid context at kernel/rtmutex.c:659
  in_atomic(): 1, irqs_disabled(): 1, pid: 8, name: rcu_preempt
  Pid: 8, comm: rcu_preempt Not tainted
  Call Trace:
   [&lt;ffffffff8106c8d0&gt;] __might_sleep+0xd0/0xf0
   [&lt;ffffffff817d77b4&gt;] rt_spin_lock+0x24/0x50
   [&lt;ffffffff8106fcf6&gt;] __wake_up+0x36/0x70
   [&lt;ffffffff810c4542&gt;] rcu_gp_kthread+0x4d2/0x680
   [&lt;ffffffff8105f910&gt;] ? __init_waitqueue_head+0x50/0x50
   [&lt;ffffffff810c4070&gt;] ? rcu_gp_fqs+0x80/0x80
   [&lt;ffffffff8105eabb&gt;] kthread+0xdb/0xe0
   [&lt;ffffffff8106b912&gt;] ? finish_task_switch+0x52/0x100
   [&lt;ffffffff817e0754&gt;] kernel_thread_helper+0x4/0x10
   [&lt;ffffffff8105e9e0&gt;] ? __init_kthread_worker+0x60/0x60
   [&lt;ffffffff817e0750&gt;] ? gs_change+0xb/0xb

...and hence simple wait queues were deployed on RT out of necessity
(as simple wait queues use a raw lock), but mainline might as well
take advantage of the more streamlined support as well.
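
A minimal sketch of the conversion pattern (illustrative only, not the
actual diff; variable name is hypothetical):

  /* before: full wait queue */
  wait_queue_head_t gp_wq;
  init_waitqueue_head(&amp;gp_wq);
  wake_up(&amp;gp_wq);

  /* after: simple wait queue; its internal lock is a raw spinlock,
   * so waking is safe from atomic context on RT */
  struct swait_queue_head gp_wq;
  init_swait_queue_head(&amp;gp_wq);
  swake_up(&amp;gp_wq);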

[1] This is a carry forward of work from v3.10-rt; the original conversion
was by Thomas on an earlier -rt version, and Sebastian extended it to
additional post-3.10 added RCU waiters; here I've added a commit log and
unified the RCU changes into one, and uprev'd it to match mainline RCU.

Signed-off-by: Daniel Wagner &lt;daniel.wagner@bmw-carit.de&gt;
Acked-by: Peter Zijlstra (Intel) &lt;peterz@infradead.org&gt;
Cc: linux-rt-users@vger.kernel.org
Cc: Boqun Feng &lt;boqun.feng@gmail.com&gt;
Cc: Marcelo Tosatti &lt;mtosatti@redhat.com&gt;
Cc: Steven Rostedt &lt;rostedt@goodmis.org&gt;
Cc: Paul Gortmaker &lt;paul.gortmaker@windriver.com&gt;
Cc: Paolo Bonzini &lt;pbonzini@redhat.com&gt;
Cc: "Paul E. McKenney" &lt;paulmck@linux.vnet.ibm.com&gt;
Link: http://lkml.kernel.org/r/1455871601-27484-6-git-send-email-wagi@monom.org
Signed-off-by: Thomas Gleixner &lt;tglx@linutronix.de&gt;
</content>
</entry>
<entry>
<title>rcu: Do not call rcu_nocb_gp_cleanup() while holding rnp-&gt;lock</title>
<updated>2016-02-25T10:27:16Z</updated>
<author>
<name>Daniel Wagner</name>
<email>daniel.wagner@bmw-carit.de</email>
</author>
<published>2016-02-19T08:46:40Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=065bb78c5b09df54d1c32e03227deb777ddff57b'/>
<id>urn:sha1:065bb78c5b09df54d1c32e03227deb777ddff57b</id>
<content type='text'>
rcu_nocb_gp_cleanup() is called while holding rnp-&gt;lock. Currently,
this is okay because the wake_up_all() in rcu_nocb_gp_cleanup() will
not enable IRQs, so lockdep is happy.

After switching over to swait this is not true anymore. swake_up_all()
enables the IRQs while processing the waiters. __do_softirq() can now
run and will eventually call rcu_process_callbacks(), which wants to
grab rnp-&gt;lock.

Let's move the rcu_nocb_gp_cleanup() call outside the lock before we
switch over to swait.
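
The reordering can be sketched as follows (illustrative; not the exact
diff):

  /* before: wake performed while rnp-&gt;lock is held */
  raw_spin_lock_irq(&amp;rnp-&gt;lock);
  rcu_nocb_gp_cleanup(rsp, rnp);
  raw_spin_unlock_irq(&amp;rnp-&gt;lock);

  /* after: pick up the wait queue under the lock, wake after release */
  raw_spin_lock_irq(&amp;rnp-&gt;lock);
  sq = rcu_nocb_gp_get(rnp);
  raw_spin_unlock_irq(&amp;rnp-&gt;lock);
  rcu_nocb_gp_cleanup(sq);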

If we were to hold rnp-&gt;lock and use swait, lockdep reports the
following:

 =================================
 [ INFO: inconsistent lock state ]
 4.2.0-rc5-00025-g9a73ba0 #136 Not tainted
 ---------------------------------
 inconsistent {IN-SOFTIRQ-W} -&gt; {SOFTIRQ-ON-W} usage.
 rcu_preempt/8 [HC0[0]:SC0[0]:HE1:SE1] takes:
  (rcu_node_1){+.?...}, at: [&lt;ffffffff811387c7&gt;] rcu_gp_kthread+0xb97/0xeb0
 {IN-SOFTIRQ-W} state was registered at:
   [&lt;ffffffff81109b9f&gt;] __lock_acquire+0xd5f/0x21e0
   [&lt;ffffffff8110be0f&gt;] lock_acquire+0xdf/0x2b0
   [&lt;ffffffff81841cc9&gt;] _raw_spin_lock_irqsave+0x59/0xa0
   [&lt;ffffffff81136991&gt;] rcu_process_callbacks+0x141/0x3c0
   [&lt;ffffffff810b1a9d&gt;] __do_softirq+0x14d/0x670
   [&lt;ffffffff810b2214&gt;] irq_exit+0x104/0x110
   [&lt;ffffffff81844e96&gt;] smp_apic_timer_interrupt+0x46/0x60
   [&lt;ffffffff81842e70&gt;] apic_timer_interrupt+0x70/0x80
   [&lt;ffffffff810dba66&gt;] rq_attach_root+0xa6/0x100
   [&lt;ffffffff810dbc2d&gt;] cpu_attach_domain+0x16d/0x650
   [&lt;ffffffff810e4b42&gt;] build_sched_domains+0x942/0xb00
   [&lt;ffffffff821777c2&gt;] sched_init_smp+0x509/0x5c1
   [&lt;ffffffff821551e3&gt;] kernel_init_freeable+0x172/0x28f
   [&lt;ffffffff8182cdce&gt;] kernel_init+0xe/0xe0
   [&lt;ffffffff8184231f&gt;] ret_from_fork+0x3f/0x70
 irq event stamp: 76
 hardirqs last  enabled at (75): [&lt;ffffffff81841330&gt;] _raw_spin_unlock_irq+0x30/0x60
 hardirqs last disabled at (76): [&lt;ffffffff8184116f&gt;] _raw_spin_lock_irq+0x1f/0x90
 softirqs last  enabled at (0): [&lt;ffffffff810a8df2&gt;] copy_process.part.26+0x602/0x1cf0
 softirqs last disabled at (0): [&lt;          (null)&gt;]           (null)
 other info that might help us debug this:
  Possible unsafe locking scenario:
        CPU0
        ----
   lock(rcu_node_1);
   &lt;Interrupt&gt;
     lock(rcu_node_1);
  *** DEADLOCK ***
 1 lock held by rcu_preempt/8:
  #0:  (rcu_node_1){+.?...}, at: [&lt;ffffffff811387c7&gt;] rcu_gp_kthread+0xb97/0xeb0
 stack backtrace:
 CPU: 0 PID: 8 Comm: rcu_preempt Not tainted 4.2.0-rc5-00025-g9a73ba0 #136
 Hardware name: Dell Inc. PowerEdge R820/066N7P, BIOS 2.0.20 01/16/2014
  0000000000000000 000000006d7e67d8 ffff881fb081fbd8 ffffffff818379e0
  0000000000000000 ffff881fb0812a00 ffff881fb081fc38 ffffffff8110813b
  0000000000000000 0000000000000001 ffff881f00000001 ffffffff8102fa4f
 Call Trace:
  [&lt;ffffffff818379e0&gt;] dump_stack+0x4f/0x7b
  [&lt;ffffffff8110813b&gt;] print_usage_bug+0x1db/0x1e0
  [&lt;ffffffff8102fa4f&gt;] ? save_stack_trace+0x2f/0x50
  [&lt;ffffffff811087ad&gt;] mark_lock+0x66d/0x6e0
  [&lt;ffffffff81107790&gt;] ? check_usage_forwards+0x150/0x150
  [&lt;ffffffff81108898&gt;] mark_held_locks+0x78/0xa0
  [&lt;ffffffff81841330&gt;] ? _raw_spin_unlock_irq+0x30/0x60
  [&lt;ffffffff81108a28&gt;] trace_hardirqs_on_caller+0x168/0x220
  [&lt;ffffffff81108aed&gt;] trace_hardirqs_on+0xd/0x10
  [&lt;ffffffff81841330&gt;] _raw_spin_unlock_irq+0x30/0x60
  [&lt;ffffffff810fd1c7&gt;] swake_up_all+0xb7/0xe0
  [&lt;ffffffff811386e1&gt;] rcu_gp_kthread+0xab1/0xeb0
  [&lt;ffffffff811089bf&gt;] ? trace_hardirqs_on_caller+0xff/0x220
  [&lt;ffffffff81841341&gt;] ? _raw_spin_unlock_irq+0x41/0x60
  [&lt;ffffffff81137c30&gt;] ? rcu_barrier+0x20/0x20
  [&lt;ffffffff810d2014&gt;] kthread+0x104/0x120
  [&lt;ffffffff81841330&gt;] ? _raw_spin_unlock_irq+0x30/0x60
  [&lt;ffffffff810d1f10&gt;] ? kthread_create_on_node+0x260/0x260
  [&lt;ffffffff8184231f&gt;] ret_from_fork+0x3f/0x70
  [&lt;ffffffff810d1f10&gt;] ? kthread_create_on_node+0x260/0x260

Signed-off-by: Daniel Wagner &lt;daniel.wagner@bmw-carit.de&gt;
Acked-by: Peter Zijlstra (Intel) &lt;peterz@infradead.org&gt;
Cc: linux-rt-users@vger.kernel.org
Cc: Boqun Feng &lt;boqun.feng@gmail.com&gt;
Cc: Marcelo Tosatti &lt;mtosatti@redhat.com&gt;
Cc: Steven Rostedt &lt;rostedt@goodmis.org&gt;
Cc: Paul Gortmaker &lt;paul.gortmaker@windriver.com&gt;
Cc: Paolo Bonzini &lt;pbonzini@redhat.com&gt;
Cc: "Paul E. McKenney" &lt;paulmck@linux.vnet.ibm.com&gt;
Link: http://lkml.kernel.org/r/1455871601-27484-5-git-send-email-wagi@monom.org
Signed-off-by: Thomas Gleixner &lt;tglx@linutronix.de&gt;
</content>
</entry>
<entry>
<title>RCU: Privatize rcu_node::lock</title>
<updated>2016-02-24T03:59:54Z</updated>
<author>
<name>Boqun Feng</name>
<email>boqun.feng@gmail.com</email>
</author>
<published>2015-12-29T04:18:47Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=67c583a7de3433a971983490b37ad2bff3c55463'/>
<id>urn:sha1:67c583a7de3433a971983490b37ad2bff3c55463</id>
<content type='text'>
In the patch:

"rcu: Add transitivity to remaining rcu_node -&gt;lock acquisitions"

all locking operations on rcu_node::lock were replaced with wrappers
because of the need for transitivity, which means we should never
write code using LOCK primitives alone (i.e. without a proper barrier
following) on rcu_node::lock outside those wrappers. We can detect
this kind of misuse of rcu_node::lock in the future by adding the
__private modifier to rcu_node::lock.

To privatize rcu_node::lock, unlock wrappers are also needed. Replacing
spinlock unlocks with these wrappers not only privatizes rcu_node::lock
but also makes it easier to figure out critical sections of rcu_node.

This patch adds the __private modifier to rcu_node::lock and wraps
every access to it with ACCESS_PRIVATE(). In addition, unlock wrappers
are added, and raw_spin_unlock(&amp;rnp-&gt;lock) and its friends are
replaced with those wrappers.
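
The wrapper shape can be sketched as (illustrative):

  /* all access to the now-private lock goes through wrappers */
  static inline void raw_spin_unlock_rcu_node(struct rcu_node *rnp)
  {
  	raw_spin_unlock(&amp;ACCESS_PRIVATE(rnp, lock));
  }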

Signed-off-by: Boqun Feng &lt;boqun.feng@gmail.com&gt;
Signed-off-by: Paul E. McKenney &lt;paulmck@linux.vnet.ibm.com&gt;
</content>
</entry>
<entry>
<title>rcu: Remove useless rcu_data_p when !PREEMPT_RCU</title>
<updated>2016-02-24T03:59:53Z</updated>
<author>
<name>Chen Gang</name>
<email>chengang@emindsoft.com.cn</email>
</author>
<published>2015-12-26T13:41:44Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=1914aab54319ed70608078df4f18ac4767753977'/>
<id>urn:sha1:1914aab54319ed70608078df4f18ac4767753977</id>
<content type='text'>
The related warning from gcc 6.0:

  In file included from kernel/rcu/tree.c:4630:0:
  kernel/rcu/tree_plugin.h:810:40: warning: ‘rcu_data_p’ defined but not used [-Wunused-const-variable]
   static struct rcu_data __percpu *const rcu_data_p = &amp;rcu_sched_data;
                                          ^~~~~~~~~~

Also remove the always-redundant rcu_data_p in tree.c.

Signed-off-by: Chen Gang &lt;gang.chen.5i5j@gmail.com&gt;
Signed-off-by: Paul E. McKenney &lt;paulmck@linux.vnet.ibm.com&gt;
</content>
</entry>
<entry>
<title>Merge branches 'doc.2015.12.05a', 'exp.2015.12.07a', 'fixes.2015.12.07a', 'list.2015.12.04b' and 'torture.2015.12.05a' into HEAD</title>
<updated>2015-12-08T01:02:54Z</updated>
<author>
<name>Paul E. McKenney</name>
<email>paulmck@linux.vnet.ibm.com</email>
</author>
<published>2015-12-08T01:02:54Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=648c630c642a4869c7fc30345880675020298852'/>
<id>urn:sha1:648c630c642a4869c7fc30345880675020298852</id>
<content type='text'>
doc.2015.12.05a:  Documentation updates
exp.2015.12.07a:  Expedited grace-period updates
fixes.2015.12.07a:  Miscellaneous fixes
list.2015.12.04b:  Linked-list updates
torture.2015.12.05a:  Torture-test updates
</content>
</entry>
<entry>
<title>rcu: Eliminate unused rcu_init_one() argument</title>
<updated>2015-12-08T01:01:19Z</updated>
<author>
<name>Paul E. McKenney</name>
<email>paulmck@linux.vnet.ibm.com</email>
</author>
<published>2015-10-20T19:38:49Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=a87f203e2731ab477386c678e59033ee103018c0'/>
<id>urn:sha1:a87f203e2731ab477386c678e59033ee103018c0</id>
<content type='text'>
Now that the rcu_state structure's -&gt;rda field is compile-time initialized,
there is no need to pass the per-CPU rcu_data structure into rcu_init_one().
This commit therefore eliminates this now-unused parameter.
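
The resulting signature change amounts to (sketch):

  -static void __init rcu_init_one(struct rcu_state *rsp,
  -				struct rcu_data __percpu *rda)
  +static void __init rcu_init_one(struct rcu_state *rsp)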

Signed-off-by: Paul E. McKenney &lt;paulmck@linux.vnet.ibm.com&gt;
</content>
</entry>
<entry>
<title>rcu: Stop disabling interrupts in scheduler fastpaths</title>
<updated>2015-12-04T20:27:31Z</updated>
<author>
<name>Paul E. McKenney</name>
<email>paulmck@linux.vnet.ibm.com</email>
</author>
<published>2015-10-07T16:10:48Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=46a5d164db53ba6066b11889abb7fa6bddbe5cf7'/>
<id>urn:sha1:46a5d164db53ba6066b11889abb7fa6bddbe5cf7</id>
<content type='text'>
We need the scheduler's fastpaths to be, well, fast, and unnecessarily
disabling and re-enabling interrupts is not consistent with this goal,
especially given that there are regions of the scheduler that already
have interrupts disabled.

This commit therefore moves the call to rcu_note_context_switch()
to one of the interrupts-disabled regions of the scheduler, and
removes the now-redundant disabling and re-enabling of interrupts from
rcu_note_context_switch() and the functions it calls.
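
The resulting shape in the scheduler can be sketched as (illustrative,
not the exact diff):

  /* __schedule(): interrupts are already disabled here */
  local_irq_disable();
  rcu_note_context_switch();	/* no longer saves/restores irq state */
  raw_spin_lock(&amp;rq-&gt;lock);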

Reported-by: Peter Zijlstra &lt;peterz@infradead.org&gt;
Signed-off-by: Paul E. McKenney &lt;paulmck@linux.vnet.ibm.com&gt;
[ paulmck: Shift rcu_note_context_switch() to avoid deadlock, as suggested
  by Peter Zijlstra. ]
</content>
</entry>
<entry>
<title>rcu: Avoid tick_nohz_active checks on NOCBs CPUs</title>
<updated>2015-12-04T20:27:31Z</updated>
<author>
<name>Paul E. McKenney</name>
<email>paulmck@linux.vnet.ibm.com</email>
</author>
<published>2015-09-29T15:59:32Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=f0f2e7d307fff226e0c1df5a07101a1216a46d8a'/>
<id>urn:sha1:f0f2e7d307fff226e0c1df5a07101a1216a46d8a</id>
<content type='text'>
Currently, rcu_prepare_for_idle() checks for tick_nohz_active, even on
individual NOCBs CPUs, unless all CPUs are marked as NOCBs CPUs at build
time.  This check is pointless on NOCBs CPUs because they never have any
callbacks posted, given that all of their callbacks are handed off to the
corresponding rcuo kthread.  There is a check for individually designated
NOCBs CPUs, but it pointlessly follows the check for tick_nohz_active.

This commit therefore moves the check for individually designated NOCBs
CPUs up with the check for CONFIG_RCU_NOCB_CPU_ALL.
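
The reordered checks can be sketched as (illustrative):

  /* rcu_prepare_for_idle(): bail out early on any NOCBs CPU */
  if (IS_ENABLED(CONFIG_RCU_NOCB_CPU_ALL) ||
      rcu_is_nocb_cpu(smp_processor_id()))
  	return;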

Signed-off-by: Paul E. McKenney &lt;paulmck@linux.vnet.ibm.com&gt;
</content>
</entry>
<entry>
<title>rcu: Fix obsolete rcu_bootup_announce_oddness() comment</title>
<updated>2015-12-04T20:27:30Z</updated>
<author>
<name>Paul E. McKenney</name>
<email>paulmck@linux.vnet.ibm.com</email>
</author>
<published>2015-09-29T15:47:49Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=699d40352059e64a4d993af170272585c41988d0'/>
<id>urn:sha1:699d40352059e64a4d993af170272585c41988d0</id>
<content type='text'>
This function no longer has #ifdefs, so this commit removes the
header comment calling them out.

Signed-off-by: Paul E. McKenney &lt;paulmck@linux.vnet.ibm.com&gt;
</content>
</entry>
</feed>
