<feed xmlns='http://www.w3.org/2005/Atom'>
<title>linux/kernel/locking, branch v4.8</title>
<subtitle>Mirror of https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/</subtitle>
<id>https://git.shady.money/linux/atom?h=v4.8</id>
<link rel='self' href='https://git.shady.money/linux/atom?h=v4.8'/>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/'/>
<updated>2016-08-10T12:13:29Z</updated>
<entry>
<title>locking/pvqspinlock: Fix a bug in qstat_read()</title>
<updated>2016-08-10T12:13:29Z</updated>
<author>
<name>Pan Xinhui</name>
<email>xinhui.pan@linux.vnet.ibm.com</email>
</author>
<published>2016-07-13T10:23:34Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=c2ace36b884de9330c4149064ae8d212d2e0d9ee'/>
<id>urn:sha1:c2ace36b884de9330c4149064ae8d212d2e0d9ee</id>
<content type='text'>
It's obviously wrong to zero 'stat' before computing the average, so
let's remove that assignment. Otherwise the kick/wake latency we
report is always zero.
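
A minimal sketch of the pattern being fixed (code shape assumed from
the changelog, not quoted from the patch):

   /* before: 'stat' is unconditionally zeroed, so the average
    * latency computed below can only ever be zero */
   stat = 0;
   if (kicks)
      stat = div_u64(stat, kicks);

   /* after: keep the accumulated latency and only divide it by
    * the number of kick events */
   if (kicks)
      stat = div_u64(stat, kicks);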

Signed-off-by: Pan Xinhui &lt;xinhui.pan@linux.vnet.ibm.com&gt;
Signed-off-by: Peter Zijlstra (Intel) &lt;peterz@infradead.org&gt;
Reviewed-by: Waiman Long &lt;Waiman.Long@hpe.com&gt;
Cc: Andrew Morton &lt;akpm@linux-foundation.org&gt;
Cc: Linus Torvalds &lt;torvalds@linux-foundation.org&gt;
Cc: Paul E. McKenney &lt;paulmck@linux.vnet.ibm.com&gt;
Cc: Peter Zijlstra &lt;peterz@infradead.org&gt;
Cc: Thomas Gleixner &lt;tglx@linutronix.de&gt;
Link: http://lkml.kernel.org/r/1468405414-3700-1-git-send-email-xinhui.pan@linux.vnet.ibm.com
Signed-off-by: Ingo Molnar &lt;mingo@kernel.org&gt;
</content>
</entry>
<entry>
<title>locking/pvqspinlock: Fix double hash race</title>
<updated>2016-08-10T12:13:28Z</updated>
<author>
<name>Wanpeng Li</name>
<email>wanpeng.li@hotmail.com</email>
</author>
<published>2016-07-14T08:15:56Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=229ce631574761870a2ac938845fadbd07f35caa'/>
<id>urn:sha1:229ce631574761870a2ac938845fadbd07f35caa</id>
<content type='text'>
When the lock holder vCPU is racing with the queue head:

   CPU 0 (lock holder)    CPU 1 (queue head)
   ===================    ==================
   spin_lock();           spin_lock();
    pv_kick_node():        pv_wait_head_or_lock():
                            if (!lp) {
                             lp = pv_hash(lock, pn);
                             xchg(&amp;l-&gt;locked, _Q_SLOW_VAL);
                            }
                            WRITE_ONCE(pn-&gt;state, vcpu_halted);
     cmpxchg(&amp;pn-&gt;state,
      vcpu_halted, vcpu_hashed);
     WRITE_ONCE(l-&gt;locked, _Q_SLOW_VAL);
     (void)pv_hash(lock, pn);

In this case, the lock holder inserts the queue head's pv_node into the
hash table a second time and sets _Q_SLOW_VAL unnecessarily, even though
the queue head has already hashed itself. This patch avoids the double
hash by restoring/setting the vcpu_hashed state after adaptive spinning
fails, as sketched below.
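
A sketch of the fix, assuming the v4.8-era shape of
pv_wait_head_or_lock() in kernel/locking/qspinlock_paravirt.h:

   /*
    * The queue head has already hashed itself at this point, so
    * advertise vcpu_hashed rather than vcpu_halted.  The lock
    * holder's cmpxchg(&amp;pn-&gt;state, vcpu_halted, vcpu_hashed)
    * then fails and the second pv_hash() is never performed.
    */
   WRITE_ONCE(pn-&gt;state, vcpu_hashed);
   pv_wait(&amp;l-&gt;locked, _Q_SLOW_VAL);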

Signed-off-by: Wanpeng Li &lt;wanpeng.li@hotmail.com&gt;
Signed-off-by: Peter Zijlstra (Intel) &lt;peterz@infradead.org&gt;
Reviewed-by: Pan Xinhui &lt;xinhui.pan@linux.vnet.ibm.com&gt;
Cc: Andrew Morton &lt;akpm@linux-foundation.org&gt;
Cc: Davidlohr Bueso &lt;dave@stgolabs.net&gt;
Cc: Linus Torvalds &lt;torvalds@linux-foundation.org&gt;
Cc: Paul E. McKenney &lt;paulmck@linux.vnet.ibm.com&gt;
Cc: Peter Zijlstra &lt;peterz@infradead.org&gt;
Cc: Thomas Gleixner &lt;tglx@linutronix.de&gt;
Cc: Waiman Long &lt;Waiman.Long@hpe.com&gt;
Link: http://lkml.kernel.org/r/1468484156-4521-1-git-send-email-wanpeng.li@hotmail.com
Signed-off-by: Ingo Molnar &lt;mingo@kernel.org&gt;
</content>
</entry>
<entry>
<title>Merge branch 'locking-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip</title>
<updated>2016-07-25T19:41:29Z</updated>
<author>
<name>Linus Torvalds</name>
<email>torvalds@linux-foundation.org</email>
</author>
<published>2016-07-25T19:41:29Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=c86ad14d305d2429c3da19462440bac50c183def'/>
<id>urn:sha1:c86ad14d305d2429c3da19462440bac50c183def</id>
<content type='text'>
Pull locking updates from Ingo Molnar:
 "The locking tree was busier in this cycle than the usual pattern - a
  couple of major projects happened to coincide.

  The main changes are:

   - implement the atomic_fetch_{add,sub,and,or,xor}() API natively
     across all SMP architectures (Peter Zijlstra)

   - add atomic_fetch_{inc,dec}() as well, using the generic primitives
     (Davidlohr Bueso)

   - optimize various aspects of rwsems (Jason Low, Davidlohr Bueso,
     Waiman Long)

   - optimize smp_cond_load_acquire() on arm64 and implement LSE based
     atomic{,64}_fetch_{add,sub,and,andnot,or,xor}{,_relaxed,_acquire,_release}()
     on arm64 (Will Deacon)

   - introduce smp_acquire__after_ctrl_dep() and fix various barrier
     mis-uses and bugs (Peter Zijlstra)

   - after discovering ancient spin_unlock_wait() barrier bugs in its
     implementation and usage, strengthen its semantics and update/fix
     usage sites (Peter Zijlstra)

   - optimize mutex_trylock() fastpath (Peter Zijlstra)

   - ... misc fixes and cleanups"
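
For reference, the return-value convention of the new primitives listed
above: atomic_fetch_$op() returns the value the variable held *before*
the operation, while the existing atomic_$op_return() returns the value
*after* it. A minimal illustration (generic API, not any particular
arch implementation):

   atomic_t v = ATOMIC_INIT(0);

   int old = atomic_fetch_add(1, &amp;v);  /* old == 0, v is now 1 */
   int new = atomic_add_return(1, &amp;v); /* new == 2, v is now 2 */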

* 'locking-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (67 commits)
  locking/atomic: Introduce inc/dec variants for the atomic_fetch_$op() API
  locking/barriers, arch/arm64: Implement LDXR+WFE based smp_cond_load_acquire()
  locking/static_keys: Fix non static symbol Sparse warning
  locking/qspinlock: Use __this_cpu_dec() instead of full-blown this_cpu_dec()
  locking/atomic, arch/tile: Fix tilepro build
  locking/atomic, arch/m68k: Remove comment
  locking/atomic, arch/arc: Fix build
  locking/Documentation: Clarify limited control-dependency scope
  locking/atomic, arch/rwsem: Employ atomic_long_fetch_add()
  locking/atomic, arch/qrwlock: Employ atomic_fetch_add_acquire()
  locking/atomic, arch/mips: Convert to _relaxed atomics
  locking/atomic, arch/alpha: Convert to _relaxed atomics
  locking/atomic: Remove the deprecated atomic_{set,clear}_mask() functions
  locking/atomic: Remove linux/atomic.h:atomic_fetch_or()
  locking/atomic: Implement atomic{,64,_long}_fetch_{add,sub,and,andnot,or,xor}{,_relaxed,_acquire,_release}()
  locking/atomic: Fix atomic64_relaxed() bits
  locking/atomic, arch/xtensa: Implement atomic_fetch_{add,sub,and,or,xor}()
  locking/atomic, arch/x86: Implement atomic{,64}_fetch_{add,sub,and,or,xor}()
  locking/atomic, arch/tile: Implement atomic{,64}_fetch_{add,sub,and,or,xor}()
  locking/atomic, arch/sparc: Implement atomic{,64}_fetch_{add,sub,and,or,xor}()
  ...
</content>
</entry>
<entry>
<title>Merge branch 'locking/arch-atomic' into locking/core, because the topic is ready</title>
<updated>2016-07-07T07:12:02Z</updated>
<author>
<name>Ingo Molnar</name>
<email>mingo@kernel.org</email>
</author>
<published>2016-07-07T07:12:02Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=36e91aa2628e46c2146049eee8b9b7f773b0ffc3'/>
<id>urn:sha1:36e91aa2628e46c2146049eee8b9b7f773b0ffc3</id>
<content type='text'>
Signed-off-by: Ingo Molnar &lt;mingo@kernel.org&gt;
</content>
</entry>
<entry>
<title>locking/qspinlock: Use __this_cpu_dec() instead of full-blown this_cpu_dec()</title>
<updated>2016-06-27T09:37:41Z</updated>
<author>
<name>Pan Xinhui</name>
<email>xinhui.pan@linux.vnet.ibm.com</email>
</author>
<published>2016-06-14T06:37:27Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=0dceeaf599e6d9b8bd908ba4bd3dfee84aa26be2'/>
<id>urn:sha1:0dceeaf599e6d9b8bd908ba4bd3dfee84aa26be2</id>
<content type='text'>
queued_spin_lock_slowpath() need not worry about another
queued_spin_lock_slowpath() running in interrupt context and changing
node-&gt;count by accident, because a nested invocation increments and
decrements node-&gt;count in balanced pairs: node-&gt;count has the same
value at every entry to and exit from queued_spin_lock_slowpath().

On some architectures this_cpu_dec() will save/restore irq flags,
which has high overhead. Use the much cheaper __this_cpu_dec() instead.
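
A sketch of the change at the end of the slowpath (the per-CPU node
release; names assumed from kernel/locking/qspinlock.c):

   /*
    * Release the MCS node.  Preemption is disabled here and any
    * nested slowpath invocation from interrupt context rebalances
    * node-&gt;count before returning, so the cheaper non-irq-safe
    * decrement is sufficient.
    */
   __this_cpu_dec(mcs_nodes[0].count);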

Signed-off-by: Pan Xinhui &lt;xinhui.pan@linux.vnet.ibm.com&gt;
Signed-off-by: Peter Zijlstra (Intel) &lt;peterz@infradead.org&gt;
Cc: Linus Torvalds &lt;torvalds@linux-foundation.org&gt;
Cc: Mike Galbraith &lt;efault@gmx.de&gt;
Cc: Peter Zijlstra &lt;peterz@infradead.org&gt;
Cc: Thomas Gleixner &lt;tglx@linutronix.de&gt;
Cc: Waiman.Long@hpe.com
Link: http://lkml.kernel.org/r/1465886247-3773-1-git-send-email-xinhui.pan@linux.vnet.ibm.com
[ Rewrote changelog. ]
Signed-off-by: Ingo Molnar &lt;mingo@kernel.org&gt;
</content>
</entry>
<entry>
<title>locking: avoid passing around 'thread_info' in mutex debugging code</title>
<updated>2016-06-23T19:11:17Z</updated>
<author>
<name>Linus Torvalds</name>
<email>torvalds@linux-foundation.org</email>
</author>
<published>2016-06-23T19:11:17Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=6720a305df74ca30bcc10fc316881641b6ff0c80'/>
<id>urn:sha1:6720a305df74ca30bcc10fc316881641b6ff0c80</id>
<content type='text'>
None of the code actually wants a thread_info, it all wants a
task_struct, and it's just converting back and forth between the two
("ti-&gt;task" to get the task_struct from the thread_info, and
"task_thread_info(task)" to go the other way).

No semantic change.

Acked-by: Peter Zijlstra &lt;peterz@infradead.org&gt;
Signed-off-by: Linus Torvalds &lt;torvalds@linux-foundation.org&gt;
</content>
</entry>
<entry>
<title>locking/atomic, arch/rwsem: Employ atomic_long_fetch_add()</title>
<updated>2016-06-16T08:48:35Z</updated>
<author>
<name>Peter Zijlstra</name>
<email>peterz@infradead.org</email>
</author>
<published>2016-05-18T10:42:21Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=86a3b5f34fc1fb307abef4fde76bebd3edce0324'/>
<id>urn:sha1:86a3b5f34fc1fb307abef4fde76bebd3edce0324</id>
<content type='text'>
Now that we have fetch_add(), we can stop open-coding it as add_return() - val.
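
The pattern being replaced, sketched (an rwsem count adjustment;
variable names assumed):

   /* before: recover the old count by subtracting the addend back out */
   old = atomic_long_add_return(adjustment, &amp;sem-&gt;count) - adjustment;

   /* after: fetch_add() returns the pre-addition value directly */
   old = atomic_long_fetch_add(adjustment, &amp;sem-&gt;count);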

Signed-off-by: Peter Zijlstra (Intel) &lt;peterz@infradead.org&gt;
Cc: Andrew Morton &lt;akpm@linux-foundation.org&gt;
Cc: Jason Low &lt;jason.low2@hpe.com&gt;
Cc: Linus Torvalds &lt;torvalds@linux-foundation.org&gt;
Cc: Paul E. McKenney &lt;paulmck@linux.vnet.ibm.com&gt;
Cc: Peter Zijlstra &lt;peterz@infradead.org&gt;
Cc: Thomas Gleixner &lt;tglx@linutronix.de&gt;
Cc: Waiman Long &lt;waiman.long@hpe.com&gt;
Cc: linux-arch@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Ingo Molnar &lt;mingo@kernel.org&gt;
</content>
</entry>
<entry>
<title>locking/atomic, arch/qrwlock: Employ atomic_fetch_add_acquire()</title>
<updated>2016-06-16T08:48:34Z</updated>
<author>
<name>Peter Zijlstra</name>
<email>peterz@infradead.org</email>
</author>
<published>2016-04-17T23:27:03Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=f9852b74bec0117b888da39d070c323ea1cb7f4c'/>
<id>urn:sha1:f9852b74bec0117b888da39d070c323ea1cb7f4c</id>
<content type='text'>
The only reason for the current code's shape is to make GCC emit just
the "LOCK XADD" instruction on x86 (and not a pointless extra ADD on
the result). Achieve the same effect more cleanly with
atomic_fetch_add_acquire(), as sketched below.
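
A sketch of the resulting change in the qrwlock reader path (names
assumed from kernel/locking/qrwlock.c):

   /* before: the '- _QR_BIAS' makes GCC emit an extra ADD after the
    * LOCK XADD just to reconstruct the old value */
   cnts = atomic_add_return_acquire(_QR_BIAS, &amp;lock-&gt;cnts) - _QR_BIAS;

   /* after: fetch_add() already returns the pre-add value, so the
    * LOCK XADD result can be used as-is */
   cnts = atomic_fetch_add_acquire(_QR_BIAS, &amp;lock-&gt;cnts);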

Signed-off-by: Peter Zijlstra (Intel) &lt;peterz@infradead.org&gt;
Acked-by: Waiman Long &lt;waiman.long@hpe.com&gt;
Cc: Andrew Morton &lt;akpm@linux-foundation.org&gt;
Cc: Linus Torvalds &lt;torvalds@linux-foundation.org&gt;
Cc: Paul E. McKenney &lt;paulmck@linux.vnet.ibm.com&gt;
Cc: Peter Zijlstra &lt;peterz@infradead.org&gt;
Cc: Thomas Gleixner &lt;tglx@linutronix.de&gt;
Cc: linux-arch@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Ingo Molnar &lt;mingo@kernel.org&gt;
</content>
</entry>
<entry>
<title>locking/atomic: Remove the deprecated atomic_{set,clear}_mask() functions</title>
<updated>2016-06-16T08:48:33Z</updated>
<author>
<name>Peter Zijlstra</name>
<email>peterz@infradead.org</email>
</author>
<published>2016-04-17T23:01:27Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=e37837fb62f95a81bdcefa86ceea043df84937d7'/>
<id>urn:sha1:e37837fb62f95a81bdcefa86ceea043df84937d7</id>
<content type='text'>
These functions have been deprecated for a while and only one user is
left; convert it and remove the functions.
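
For reference, the long-standing replacements for the deprecated
calls (sketch):

   /* was: atomic_set_mask(mask, &amp;v); */
   atomic_or(mask, &amp;v);

   /* was: atomic_clear_mask(mask, &amp;v); */
   atomic_andnot(mask, &amp;v);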

Signed-off-by: Peter Zijlstra (Intel) &lt;peterz@infradead.org&gt;
Cc: Andrew Morton &lt;akpm@linux-foundation.org&gt;
Cc: Boqun Feng &lt;boqun.feng@gmail.com&gt;
Cc: Davidlohr Bueso &lt;dave@stgolabs.net&gt;
Cc: Frederic Weisbecker &lt;fweisbec@gmail.com&gt;
Cc: Linus Torvalds &lt;torvalds@linux-foundation.org&gt;
Cc: Paul E. McKenney &lt;paulmck@linux.vnet.ibm.com&gt;
Cc: Peter Zijlstra &lt;peterz@infradead.org&gt;
Cc: Thomas Gleixner &lt;tglx@linutronix.de&gt;
Cc: Will Deacon &lt;will.deacon@arm.com&gt;
Cc: linux-arch@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Ingo Molnar &lt;mingo@kernel.org&gt;
</content>
</entry>
<entry>
<title>locking/barriers: Introduce smp_acquire__after_ctrl_dep()</title>
<updated>2016-06-14T09:55:14Z</updated>
<author>
<name>Peter Zijlstra</name>
<email>peterz@infradead.org</email>
</author>
<published>2016-05-24T11:17:12Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=33ac279677dcc2441cb93d8cb9cf7a74df62814d'/>
<id>urn:sha1:33ac279677dcc2441cb93d8cb9cf7a74df62814d</id>
<content type='text'>
Introduce smp_acquire__after_ctrl_dep(): the construct it names (an
smp_rmb() that upgrades a control dependency into a load-ACQUIRE) is
not uncommon, but until now there was no barrier expressing it
directly.

Use it to better express the smp_rmb() uses in WRITE_ONCE(), the IPC
semaphore code and the qspinlock code.
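
A sketch of the intended usage pattern (hypothetical flag-wait loop,
not one of the converted call sites):

   /* the branch on the loaded value forms a control dependency */
   while (!READ_ONCE(obj-&gt;done))
      cpu_relax();

   /*
    * Upgrade the control dependency to ACQUIRE: together with the
    * branch it orders later loads and stores after the load of
    * obj-&gt;done, just like a load-acquire of obj-&gt;done would.
    */
   smp_acquire__after_ctrl_dep();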

Signed-off-by: Peter Zijlstra (Intel) &lt;peterz@infradead.org&gt;
Cc: Andrew Morton &lt;akpm@linux-foundation.org&gt;
Cc: Linus Torvalds &lt;torvalds@linux-foundation.org&gt;
Cc: Paul E. McKenney &lt;paulmck@linux.vnet.ibm.com&gt;
Cc: Peter Zijlstra &lt;peterz@infradead.org&gt;
Cc: Thomas Gleixner &lt;tglx@linutronix.de&gt;
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Ingo Molnar &lt;mingo@kernel.org&gt;
</content>
</entry>
</feed>
