<feed xmlns='http://www.w3.org/2005/Atom'>
<title>linux/kernel/sched/clock.c, branch v4.5</title>
<subtitle>Mirror of https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/
</subtitle>
<id>https://git.shady.money/linux/atom?h=v4.5</id>
<link rel='self' href='https://git.shady.money/linux/atom?h=v4.5'/>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/'/>
<updated>2016-01-12T02:53:13Z</updated>
<entry>
<title>Merge branch 'for-4.5' of git://git.kernel.org/pub/scm/linux/kernel/git/tj/wq</title>
<updated>2016-01-12T02:53:13Z</updated>
<author>
<name>Linus Torvalds</name>
<email>torvalds@linux-foundation.org</email>
</author>
<published>2016-01-12T02:53:13Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=0f8c7901039f8b1366ae364462743c8f4125822e'/>
<id>urn:sha1:0f8c7901039f8b1366ae364462743c8f4125822e</id>
<content type='text'>
Pull workqueue update from Tejun Heo:
 "Workqueue changes for v4.5.  One cleanup patch and three that improve
  debuggability.

  Workqueue now has a stall detector which dumps workqueue state if any
  worker pool hasn't made forward progress over a certain amount of time
  (30s by default) and also triggers a warning if a workqueue which can
  be used in memory reclaim path tries to wait on something which can't
  be.

  These should make workqueue hangs a lot easier to debug."

* 'for-4.5' of git://git.kernel.org/pub/scm/linux/kernel/git/tj/wq:
  workqueue: simplify the apply_workqueue_attrs_locked()
  workqueue: implement lockup detector
  watchdog: introduce touch_softlockup_watchdog_sched()
  workqueue: warn if memory reclaim tries to flush !WQ_MEM_RECLAIM workqueue
</content>
</entry>
<entry>
<title>watchdog: introduce touch_softlockup_watchdog_sched()</title>
<updated>2015-12-08T16:29:42Z</updated>
<author>
<name>Tejun Heo</name>
<email>tj@kernel.org</email>
</author>
<published>2015-12-08T16:28:04Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=03e0d4610bf4d4a93bfa16b2474ed4fd5243aa71'/>
<id>urn:sha1:03e0d4610bf4d4a93bfa16b2474ed4fd5243aa71</id>
<content type='text'>
touch_softlockup_watchdog() is used to tell the watchdog that a
scheduler stall is expected.  One group of usages is from paths where
the task may not be able to yield for a long time, such as performing
slow PIO to a finicky device or coming out of suspend.  The other is
to account for the scheduler and timer going idle.

For scheduler softlockup detection, there's no reason to distinguish
the two cases; however, a workqueue lockup detector is planned, and it
can use the same signals from the former group, while the latter would
spuriously prevent detection.  This patch introduces a new function,
touch_softlockup_watchdog_sched(), and converts the latter group to
call it instead.  For now it just calls touch_softlockup_watchdog(),
so there is no functional difference.
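
The forwarding step described above can be sketched as follows (a
minimal, self-contained stand-in; the helper names come from the
commit, but the stub state used here is hypothetical):

```c
static int watchdog_touched;       /* hypothetical stand-in for watchdog state */

/* Existing generic helper (stubbed for illustration). */
static void touch_softlockup_watchdog(void)
{
	watchdog_touched = 1;
}

/*
 * New wrapper: callers that merely account for the scheduler/timer
 * going idle switch to this.  For now it simply forwards, so there is
 * no functional difference; later it lets the planned workqueue lockup
 * detector ignore these touches.
 */
static void touch_softlockup_watchdog_sched(void)
{
	touch_softlockup_watchdog();
}
```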

Signed-off-by: Tejun Heo &lt;tj@kernel.org&gt;
Cc: Ulrich Obergfell &lt;uobergfe@redhat.com&gt;
Cc: Ingo Molnar &lt;mingo@redhat.com&gt;
Cc: Peter Zijlstra &lt;peterz@infradead.org&gt;
Cc: Thomas Gleixner &lt;tglx@linutronix.de&gt;
Cc: Andrew Morton &lt;akpm@linux-foundation.org&gt;
</content>
</entry>
<entry>
<title>treewide: Remove old email address</title>
<updated>2015-11-23T08:44:58Z</updated>
<author>
<name>Peter Zijlstra</name>
<email>peterz@infradead.org</email>
</author>
<published>2015-11-16T10:08:45Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=90eec103b96e30401c0b846045bf8a1c7159b6da'/>
<id>urn:sha1:90eec103b96e30401c0b846045bf8a1c7159b6da</id>
<content type='text'>
There were still a number of references to my old Red Hat email
address in the kernel source. Remove these while keeping the
Red Hat copyright notices intact.

Signed-off-by: Peter Zijlstra (Intel) &lt;peterz@infradead.org&gt;
Cc: Arnaldo Carvalho de Melo &lt;acme@redhat.com&gt;
Cc: Jiri Olsa &lt;jolsa@redhat.com&gt;
Cc: Linus Torvalds &lt;torvalds@linux-foundation.org&gt;
Cc: Mike Galbraith &lt;efault@gmx.de&gt;
Cc: Peter Zijlstra &lt;peterz@infradead.org&gt;
Cc: Stephane Eranian &lt;eranian@google.com&gt;
Cc: Thomas Gleixner &lt;tglx@linutronix.de&gt;
Cc: Vince Weaver &lt;vincent.weaver@maine.edu&gt;
Signed-off-by: Ingo Molnar &lt;mingo@kernel.org&gt;
</content>
</entry>
<entry>
<title>kernel/sched/clock.c: add another clock for use with the soft lockup watchdog</title>
<updated>2015-02-13T02:54:13Z</updated>
<author>
<name>Cyril Bur</name>
<email>cyrilbur@gmail.com</email>
</author>
<published>2015-02-12T23:01:24Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=545a2bf742fb41f17d03486dd8a8c74ad511dec2'/>
<id>urn:sha1:545a2bf742fb41f17d03486dd8a8c74ad511dec2</id>
<content type='text'>
When the hypervisor pauses a virtualised kernel, the kernel will observe
a jump in timebase; this can cause spurious messages from the softlockup
detector.

Whilst these messages are harmless, they are accompanied by a stack
trace which causes undue concern; more problematically, the stack trace
in the guest has nothing to do with the observed problem and can only be
misleading.

Furthermore, on POWER8 this is completely avoidable with the introduction
of the Virtual Time Base (VTB) register.

This patch (of 2):

This permits the use of arch-specific clocks, with which virtualised
kernels can use their notion of 'running' time rather than the elapsed
wall time, which will include host execution time.

Signed-off-by: Cyril Bur &lt;cyrilbur@gmail.com&gt;
Cc: Michael Ellerman &lt;mpe@ellerman.id.au&gt;
Cc: Andrew Jones &lt;drjones@redhat.com&gt;
Acked-by: Don Zickus &lt;dzickus@redhat.com&gt;
Cc: Ingo Molnar &lt;mingo@kernel.org&gt;
Cc: Ulrich Obergfell &lt;uobergfe@redhat.com&gt;
Cc: chai wen &lt;chaiw.fnst@cn.fujitsu.com&gt;
Cc: Fabian Frederick &lt;fabf@skynet.be&gt;
Cc: Aaron Tomlin &lt;atomlin@redhat.com&gt;
Cc: Ben Zhang &lt;benzh@chromium.org&gt;
Cc: Martin Schwidefsky &lt;schwidefsky@de.ibm.com&gt;
Cc: John Stultz &lt;john.stultz@linaro.org&gt;
Cc: Thomas Gleixner &lt;tglx@linutronix.de&gt;
Signed-off-by: Andrew Morton &lt;akpm@linux-foundation.org&gt;
Signed-off-by: Linus Torvalds &lt;torvalds@linux-foundation.org&gt;
</content>
</entry>
<entry>
<title>time: Replace __get_cpu_var uses</title>
<updated>2014-08-26T17:45:44Z</updated>
<author>
<name>Christoph Lameter</name>
<email>cl@linux.com</email>
</author>
<published>2014-08-17T17:30:25Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=22127e93c587afa01e4f7225d2d1cf1d26ae7dfe'/>
<id>urn:sha1:22127e93c587afa01e4f7225d2d1cf1d26ae7dfe</id>
<content type='text'>
Convert uses of __get_cpu_var for creating an address from a percpu
offset to this_cpu_ptr.

The two cases where get_cpu_var is used to actually access a percpu
variable are changed to use this_cpu_read/raw_cpu_read.

Reviewed-by: Thomas Gleixner &lt;tglx@linutronix.de&gt;
Signed-off-by: Christoph Lameter &lt;cl@linux.com&gt;
Signed-off-by: Tejun Heo &lt;tj@kernel.org&gt;
</content>
</entry>
<entry>
<title>kernel: use macros from compiler.h instead of __attribute__((...))</title>
<updated>2014-04-07T23:36:11Z</updated>
<author>
<name>Gideon Israel Dsouza</name>
<email>gidisrael@gmail.com</email>
</author>
<published>2014-04-07T22:39:20Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=52f5684c8e1ec7463192aba8e2916df49807511a'/>
<id>urn:sha1:52f5684c8e1ec7463192aba8e2916df49807511a</id>
<content type='text'>
To increase compiler portability there is &lt;linux/compiler.h&gt;, which
provides convenience macros for various gcc constructs, e.g. __weak for
__attribute__((weak)).  I've replaced all instances of gcc attributes
with the appropriate macro in the kernel subsystem.
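
As a sketch of the pattern (hypothetical and self-contained; the real
macro is defined in linux/compiler.h):

```c
/* Convenience macro wrapping a gcc attribute, as linux/compiler.h does. */
#define __weak __attribute__((weak))

/* A weak default definition that another translation unit could override. */
int __weak board_init(void)
{
	return 0;
}
```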

Signed-off-by: Gideon Israel Dsouza &lt;gidisrael@gmail.com&gt;
Cc: "Rafael J. Wysocki" &lt;rjw@sisk.pl&gt;
Cc: Ingo Molnar &lt;mingo@elte.hu&gt;
Signed-off-by: Andrew Morton &lt;akpm@linux-foundation.org&gt;
Signed-off-by: Linus Torvalds &lt;torvalds@linux-foundation.org&gt;
</content>
</entry>
<entry>
<title>sched/clock: Prevent tracing recursion in sched_clock_cpu()</title>
<updated>2014-03-11T10:33:48Z</updated>
<author>
<name>Fernando Luis Vazquez Cao</name>
<email>fernando@oss.ntt.co.jp</email>
</author>
<published>2014-03-06T05:25:28Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=96b3d28bf4b00f62fc8386ff5d487d1830793a3d'/>
<id>urn:sha1:96b3d28bf4b00f62fc8386ff5d487d1830793a3d</id>
<content type='text'>
Prevent tracing of preempt_disable/enable() in sched_clock_cpu().
When CONFIG_DEBUG_PREEMPT is enabled, preempt_disable/enable() are
traced and this causes trace_clock() users (and probably others) to
go into an infinite recursion. Systems with a stable sched_clock()
are not affected.

This problem is similar to that fixed by upstream commit 95ef1e52922
("KVM guest: prevent tracing recursion with kvmclock").

Signed-off-by: Fernando Luis Vazquez Cao &lt;fernando@oss.ntt.co.jp&gt;
Signed-off-by: Peter Zijlstra &lt;peterz@infradead.org&gt;
Acked-by: Steven Rostedt &lt;rostedt@goodmis.org&gt;
Cc: Andrew Morton &lt;akpm@linux-foundation.org&gt;
Cc: Linus Torvalds &lt;torvalds@linux-foundation.org&gt;
Link: http://lkml.kernel.org/r/1394083528.4524.3.camel@nexus
Signed-off-by: Ingo Molnar &lt;mingo@kernel.org&gt;
</content>
</entry>
<entry>
<title>sched/clock: Fixup early initialization</title>
<updated>2014-01-23T13:48:36Z</updated>
<author>
<name>Peter Zijlstra</name>
<email>peterz@infradead.org</email>
</author>
<published>2014-01-22T11:59:18Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=d375b4e0fa3771343b370be0d876a1963c02e0a0'/>
<id>urn:sha1:d375b4e0fa3771343b370be0d876a1963c02e0a0</id>
<content type='text'>
The code would assume sched_clock_stable() and switch to !stable
later; this switch brings a discontinuity in time.

The discontinuity on switching from stable to unstable was always
present, but previously we would set stable/unstable before
initializing TSC and usually stick to the one we start out with.

So the static_key bits brought an extra switch where there previously
wasn't one.

Things are further complicated by the fact that we cannot use
static_key as early as we usually call set_sched_clock_stable().

Fix things by tracking the stable state in a regular variable and only
setting the static_key to the right state in sched_clock_init(), which is
run right after late_time_init-&gt;tsc_init().

Before this we would not be using the TSC anyway.

Reported-and-Tested-by: Sasha Levin &lt;sasha.levin@oracle.com&gt;
Reported-by: dyoung@redhat.com
Fixes: 35af99e646c7 ("sched/clock, x86: Use a static_key for sched_clock_stable")
Cc: jacob.jun.pan@linux.intel.com
Cc: Mike Galbraith &lt;bitbucket@online.de&gt;
Cc: hpa@zytor.com
Cc: paulmck@linux.vnet.ibm.com
Cc: John Stultz &lt;john.stultz@linaro.org&gt;
Cc: Andy Lutomirski &lt;luto@amacapital.net&gt;
Cc: Arjan van de Ven &lt;arjan@linux.intel.com&gt;
Cc: lenb@kernel.org
Cc: rjw@rjwysocki.net
Cc: Eliezer Tamir &lt;eliezer.tamir@linux.intel.com&gt;
Cc: rui.zhang@intel.com
Signed-off-by: Peter Zijlstra &lt;peterz@infradead.org&gt;
Link: http://lkml.kernel.org/r/20140122115918.GG3694@twins.programming.kicks-ass.net
Signed-off-by: Ingo Molnar &lt;mingo@kernel.org&gt;
</content>
</entry>
<entry>
<title>sched/clock: Fix up clear_sched_clock_stable()</title>
<updated>2014-01-13T14:13:15Z</updated>
<author>
<name>Peter Zijlstra</name>
<email>peterz@infradead.org</email>
</author>
<published>2013-12-11T17:55:53Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=6577e42a3e1633afe762f47da9e00061ee4b9a5e'/>
<id>urn:sha1:6577e42a3e1633afe762f47da9e00061ee4b9a5e</id>
<content type='text'>
The below tells us the static_key conversion has a problem; since the
exact point of clearing that flag isn't too important, delay the flip
and use a workqueue to process it.

[ ] TSC synchronization [CPU#0 -&gt; CPU#22]:
[ ] Measured 8 cycles TSC warp between CPUs, turning off TSC clock.
[ ]
[ ] ======================================================
[ ] [ INFO: possible circular locking dependency detected ]
[ ] 3.13.0-rc3-01745-g848b0d0322cb-dirty #637 Not tainted
[ ] -------------------------------------------------------
[ ] swapper/0/1 is trying to acquire lock:
[ ]  (jump_label_mutex){+.+...}, at: [&lt;ffffffff8115a637&gt;] jump_label_lock+0x17/0x20
[ ]
[ ] but task is already holding lock:
[ ]  (cpu_hotplug.lock){+.+.+.}, at: [&lt;ffffffff8109408b&gt;] cpu_hotplug_begin+0x2b/0x60
[ ]
[ ] which lock already depends on the new lock.
[ ]
[ ]
[ ] the existing dependency chain (in reverse order) is:
[ ]
[ ] -&gt; #1 (cpu_hotplug.lock){+.+.+.}:
[ ]        [&lt;ffffffff810def00&gt;] lock_acquire+0x90/0x130
[ ]        [&lt;ffffffff81661f83&gt;] mutex_lock_nested+0x63/0x3e0
[ ]        [&lt;ffffffff81093fdc&gt;] get_online_cpus+0x3c/0x60
[ ]        [&lt;ffffffff8104cc67&gt;] arch_jump_label_transform+0x37/0x130
[ ]        [&lt;ffffffff8115a3cf&gt;] __jump_label_update+0x5f/0x80
[ ]        [&lt;ffffffff8115a48d&gt;] jump_label_update+0x9d/0xb0
[ ]        [&lt;ffffffff8115aa6d&gt;] static_key_slow_inc+0x9d/0xb0
[ ]        [&lt;ffffffff810c0f65&gt;] sched_feat_set+0xf5/0x100
[ ]        [&lt;ffffffff810c5bdc&gt;] set_numabalancing_state+0x2c/0x30
[ ]        [&lt;ffffffff81d12f3d&gt;] numa_policy_init+0x1af/0x1b7
[ ]        [&lt;ffffffff81cebdf4&gt;] start_kernel+0x35d/0x41f
[ ]        [&lt;ffffffff81ceb5a5&gt;] x86_64_start_reservations+0x2a/0x2c
[ ]        [&lt;ffffffff81ceb6a2&gt;] x86_64_start_kernel+0xfb/0xfe
[ ]
[ ] -&gt; #0 (jump_label_mutex){+.+...}:
[ ]        [&lt;ffffffff810de141&gt;] __lock_acquire+0x1701/0x1eb0
[ ]        [&lt;ffffffff810def00&gt;] lock_acquire+0x90/0x130
[ ]        [&lt;ffffffff81661f83&gt;] mutex_lock_nested+0x63/0x3e0
[ ]        [&lt;ffffffff8115a637&gt;] jump_label_lock+0x17/0x20
[ ]        [&lt;ffffffff8115aa3b&gt;] static_key_slow_inc+0x6b/0xb0
[ ]        [&lt;ffffffff810ca775&gt;] clear_sched_clock_stable+0x15/0x20
[ ]        [&lt;ffffffff810503b3&gt;] mark_tsc_unstable+0x23/0x70
[ ]        [&lt;ffffffff810772cb&gt;] check_tsc_sync_source+0x14b/0x150
[ ]        [&lt;ffffffff81076612&gt;] native_cpu_up+0x3a2/0x890
[ ]        [&lt;ffffffff810941cb&gt;] _cpu_up+0xdb/0x160
[ ]        [&lt;ffffffff810942c9&gt;] cpu_up+0x79/0x90
[ ]        [&lt;ffffffff81d0af6b&gt;] smp_init+0x60/0x8c
[ ]        [&lt;ffffffff81cebf42&gt;] kernel_init_freeable+0x8c/0x197
[ ]        [&lt;ffffffff8164e32e&gt;] kernel_init+0xe/0x130
[ ]        [&lt;ffffffff8166beec&gt;] ret_from_fork+0x7c/0xb0
[ ]
[ ] other info that might help us debug this:
[ ]
[ ]  Possible unsafe locking scenario:
[ ]
[ ]        CPU0                    CPU1
[ ]        ----                    ----
[ ]   lock(cpu_hotplug.lock);
[ ]                                lock(jump_label_mutex);
[ ]                                lock(cpu_hotplug.lock);
[ ]   lock(jump_label_mutex);
[ ]
[ ]  *** DEADLOCK ***
[ ]
[ ] 2 locks held by swapper/0/1:
[ ]  #0:  (cpu_add_remove_lock){+.+.+.}, at: [&lt;ffffffff81094037&gt;] cpu_maps_update_begin+0x17/0x20
[ ]  #1:  (cpu_hotplug.lock){+.+.+.}, at: [&lt;ffffffff8109408b&gt;] cpu_hotplug_begin+0x2b/0x60
[ ]
[ ] stack backtrace:
[ ] CPU: 0 PID: 1 Comm: swapper/0 Not tainted 3.13.0-rc3-01745-g848b0d0322cb-dirty #637
[ ] Hardware name: Supermicro X8DTN/X8DTN, BIOS 4.6.3 01/08/2010
[ ]  ffffffff82c9c270 ffff880236843bb8 ffffffff8165c5f5 ffffffff82c9c270
[ ]  ffff880236843bf8 ffffffff81658c02 ffff880236843c80 ffff8802368586a0
[ ]  ffff880236858678 0000000000000001 0000000000000002 ffff880236858000
[ ] Call Trace:
[ ]  [&lt;ffffffff8165c5f5&gt;] dump_stack+0x4e/0x7a
[ ]  [&lt;ffffffff81658c02&gt;] print_circular_bug+0x1f9/0x207
[ ]  [&lt;ffffffff810de141&gt;] __lock_acquire+0x1701/0x1eb0
[ ]  [&lt;ffffffff816680ff&gt;] ? __atomic_notifier_call_chain+0x8f/0xb0
[ ]  [&lt;ffffffff810def00&gt;] lock_acquire+0x90/0x130
[ ]  [&lt;ffffffff8115a637&gt;] ? jump_label_lock+0x17/0x20
[ ]  [&lt;ffffffff8115a637&gt;] ? jump_label_lock+0x17/0x20
[ ]  [&lt;ffffffff81661f83&gt;] mutex_lock_nested+0x63/0x3e0
[ ]  [&lt;ffffffff8115a637&gt;] ? jump_label_lock+0x17/0x20
[ ]  [&lt;ffffffff8115a637&gt;] jump_label_lock+0x17/0x20
[ ]  [&lt;ffffffff8115aa3b&gt;] static_key_slow_inc+0x6b/0xb0
[ ]  [&lt;ffffffff810ca775&gt;] clear_sched_clock_stable+0x15/0x20
[ ]  [&lt;ffffffff810503b3&gt;] mark_tsc_unstable+0x23/0x70
[ ]  [&lt;ffffffff810772cb&gt;] check_tsc_sync_source+0x14b/0x150
[ ]  [&lt;ffffffff81076612&gt;] native_cpu_up+0x3a2/0x890
[ ]  [&lt;ffffffff810941cb&gt;] _cpu_up+0xdb/0x160
[ ]  [&lt;ffffffff810942c9&gt;] cpu_up+0x79/0x90
[ ]  [&lt;ffffffff81d0af6b&gt;] smp_init+0x60/0x8c
[ ]  [&lt;ffffffff81cebf42&gt;] kernel_init_freeable+0x8c/0x197
[ ]  [&lt;ffffffff8164e320&gt;] ? rest_init+0xd0/0xd0
[ ]  [&lt;ffffffff8164e32e&gt;] kernel_init+0xe/0x130
[ ]  [&lt;ffffffff8166beec&gt;] ret_from_fork+0x7c/0xb0
[ ]  [&lt;ffffffff8164e320&gt;] ? rest_init+0xd0/0xd0
[ ] ------------[ cut here ]------------
[ ] WARNING: CPU: 0 PID: 1 at /usr/src/linux-2.6/kernel/smp.c:374 smp_call_function_many+0xad/0x300()
[ ] Modules linked in:
[ ] CPU: 0 PID: 1 Comm: swapper/0 Not tainted 3.13.0-rc3-01745-g848b0d0322cb-dirty #637
[ ] Hardware name: Supermicro X8DTN/X8DTN, BIOS 4.6.3 01/08/2010
[ ]  0000000000000009 ffff880236843be0 ffffffff8165c5f5 0000000000000000
[ ]  ffff880236843c18 ffffffff81093d8c 0000000000000000 0000000000000000
[ ]  ffffffff81ccd1a0 ffffffff810ca951 0000000000000000 ffff880236843c28
[ ] Call Trace:
[ ]  [&lt;ffffffff8165c5f5&gt;] dump_stack+0x4e/0x7a
[ ]  [&lt;ffffffff81093d8c&gt;] warn_slowpath_common+0x8c/0xc0
[ ]  [&lt;ffffffff810ca951&gt;] ? sched_clock_tick+0x1/0xa0
[ ]  [&lt;ffffffff81093dda&gt;] warn_slowpath_null+0x1a/0x20
[ ]  [&lt;ffffffff8110b72d&gt;] smp_call_function_many+0xad/0x300
[ ]  [&lt;ffffffff8104f200&gt;] ? arch_unregister_cpu+0x30/0x30
[ ]  [&lt;ffffffff8104f200&gt;] ? arch_unregister_cpu+0x30/0x30
[ ]  [&lt;ffffffff810ca951&gt;] ? sched_clock_tick+0x1/0xa0
[ ]  [&lt;ffffffff8110ba96&gt;] smp_call_function+0x46/0x80
[ ]  [&lt;ffffffff8104f200&gt;] ? arch_unregister_cpu+0x30/0x30
[ ]  [&lt;ffffffff8110bb3c&gt;] on_each_cpu+0x3c/0xa0
[ ]  [&lt;ffffffff810ca950&gt;] ? sched_clock_idle_sleep_event+0x20/0x20
[ ]  [&lt;ffffffff810ca951&gt;] ? sched_clock_tick+0x1/0xa0
[ ]  [&lt;ffffffff8104f964&gt;] text_poke_bp+0x64/0xd0
[ ]  [&lt;ffffffff810ca950&gt;] ? sched_clock_idle_sleep_event+0x20/0x20
[ ]  [&lt;ffffffff8104ccde&gt;] arch_jump_label_transform+0xae/0x130
[ ]  [&lt;ffffffff8115a3cf&gt;] __jump_label_update+0x5f/0x80
[ ]  [&lt;ffffffff8115a48d&gt;] jump_label_update+0x9d/0xb0
[ ]  [&lt;ffffffff8115aa6d&gt;] static_key_slow_inc+0x9d/0xb0
[ ]  [&lt;ffffffff810ca775&gt;] clear_sched_clock_stable+0x15/0x20
[ ]  [&lt;ffffffff810503b3&gt;] mark_tsc_unstable+0x23/0x70
[ ]  [&lt;ffffffff810772cb&gt;] check_tsc_sync_source+0x14b/0x150
[ ]  [&lt;ffffffff81076612&gt;] native_cpu_up+0x3a2/0x890
[ ]  [&lt;ffffffff810941cb&gt;] _cpu_up+0xdb/0x160
[ ]  [&lt;ffffffff810942c9&gt;] cpu_up+0x79/0x90
[ ]  [&lt;ffffffff81d0af6b&gt;] smp_init+0x60/0x8c
[ ]  [&lt;ffffffff81cebf42&gt;] kernel_init_freeable+0x8c/0x197
[ ]  [&lt;ffffffff8164e320&gt;] ? rest_init+0xd0/0xd0
[ ]  [&lt;ffffffff8164e32e&gt;] kernel_init+0xe/0x130
[ ]  [&lt;ffffffff8166beec&gt;] ret_from_fork+0x7c/0xb0
[ ]  [&lt;ffffffff8164e320&gt;] ? rest_init+0xd0/0xd0
[ ] ---[ end trace 6ff1df5620c49d26 ]---
[ ] tsc: Marking TSC unstable due to check_tsc_sync_source failed

Signed-off-by: Peter Zijlstra &lt;peterz@infradead.org&gt;
Link: http://lkml.kernel.org/n/tip-v55fgqj3nnyqnngmvuu8ep6h@git.kernel.org
Signed-off-by: Ingo Molnar &lt;mingo@kernel.org&gt;
</content>
</entry>
<entry>
<title>sched/clock, x86: Use a static_key for sched_clock_stable</title>
<updated>2014-01-13T14:13:13Z</updated>
<author>
<name>Peter Zijlstra</name>
<email>peterz@infradead.org</email>
</author>
<published>2013-11-28T18:38:42Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=35af99e646c7f7ea46dc2977601e9e71a51dadd5'/>
<id>urn:sha1:35af99e646c7f7ea46dc2977601e9e71a51dadd5</id>
<content type='text'>
In order to avoid the runtime condition and variable load, turn
sched_clock_stable into a static_key.

Also provide a shorter implementation of local_clock() and
cpu_clock(int) when sched_clock_stable==1.

                        MAINLINE   PRE       POST

    sched_clock_stable: 1          1         1
    (cold) sched_clock: 329841     221876    215295
    (cold) local_clock: 301773     234692    220773
    (warm) sched_clock: 38375      25602     25659
    (warm) local_clock: 100371     33265     27242
    (warm) rdtsc:       27340      24214     24208
    sched_clock_stable: 0          0         0
    (cold) sched_clock: 382634     235941    237019
    (cold) local_clock: 396890     297017    294819
    (warm) sched_clock: 38194      25233     25609
    (warm) local_clock: 143452     71234     71232
    (warm) rdtsc:       27345      24245     24243

Signed-off-by: Peter Zijlstra &lt;peterz@infradead.org&gt;
Cc: Linus Torvalds &lt;torvalds@linux-foundation.org&gt;
Cc: Andrew Morton &lt;akpm@linux-foundation.org&gt;
Link: http://lkml.kernel.org/n/tip-eummbdechzz37mwmpags1gjr@git.kernel.org
Signed-off-by: Ingo Molnar &lt;mingo@kernel.org&gt;
</content>
</entry>
</feed>
