<feed xmlns='http://www.w3.org/2005/Atom'>
<title>linux/kernel/sched/debug.c, branch v5.14</title>
<subtitle>Mirror of https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/
</subtitle>
<id>https://git.shady.money/linux/atom?h=v5.14</id>
<link rel='self' href='https://git.shady.money/linux/atom?h=v5.14'/>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/'/>
<updated>2021-06-18T09:31:25Z</updated>
<entry>
<title>Merge branch 'sched/urgent' into sched/core, to resolve conflicts</title>
<updated>2021-06-18T09:31:25Z</updated>
<author>
<name>Ingo Molnar</name>
<email>mingo@kernel.org</email>
</author>
<published>2021-06-18T09:31:25Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=b2c0931a07b7376c6291e0cfb347ad27f7b66263'/>
<id>urn:sha1:b2c0931a07b7376c6291e0cfb347ad27f7b66263</id>
<content type='text'>
This commit in sched/urgent moved the cfs_rq_is_decayed() function:

  a7b359fc6a37: ("sched/fair: Correctly insert cfs_rq's to list on unthrottle")

and this fresh commit in sched/core modified it in the old location:

  9e077b52d86a: ("sched/pelt: Check that *_avg are null when *_sum are")

Merge the two variants.

Conflicts:
	kernel/sched/fair.c

Signed-off-by: Ingo Molnar &lt;mingo@kernel.org&gt;
</content>
</entry>
<entry>
<title>sched/fair: Fix util_est UTIL_AVG_UNCHANGED handling</title>
<updated>2021-06-03T13:47:23Z</updated>
<author>
<name>Dietmar Eggemann</name>
<email>dietmar.eggemann@arm.com</email>
</author>
<published>2021-06-02T14:58:08Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=68d7a190682aa4eb02db477328088ebad15acc83'/>
<id>urn:sha1:68d7a190682aa4eb02db477328088ebad15acc83</id>
<content type='text'>
The util_est-internal UTIL_AVG_UNCHANGED flag, which is used to prevent
unnecessary util_est updates, uses the LSB of util_est.enqueued. It is
exposed via _task_util_est() (and task_util_est()).

Commit 92a801e5d5b7 ("sched/fair: Mask UTIL_AVG_UNCHANGED usages")
mentions that the LSB is lost for util_est resolution, but
find_energy_efficient_cpu() checks whether task_util_est() returns 0 in
order to return prev_cpu early.

_task_util_est() returns the maximum of util_est.ewma and
util_est.enqueued, or'ed with UTIL_AVG_UNCHANGED.
So task_util_est(), which returns the max of task_util() and
_task_util_est(), will never return 0 under the default
SCHED_FEAT(UTIL_EST, true).

To fix this, use the MSB of util_est.enqueued instead and keep the flag
util_est-internal, i.e. don't export it via _task_util_est().

The maximal possible util_avg value for a task is 1024, so the MSB of
'unsigned int util_est.enqueued' isn't used to store a util value.
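
In sketch form (the names come from the description above, but the
exact flag value and accessor body here are an illustration of the
idea, not the verbatim diff):

  /* Illustrative sketch: the flag now lives in the MSB. */
  #define UTIL_AVG_UNCHANGED 0x80000000

  static inline unsigned long _task_util_est(struct task_struct *p)
  {
          struct util_est ue = READ_ONCE(p-&gt;se.avg.util_est);

          /* Strip the flag bit on read, so it is never exposed. */
          return max(ue.ewma, (ue.enqueued &amp; ~UTIL_AVG_UNCHANGED));
  }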

As a caveat, the code behind the util_est_se trace point has to filter
out UTIL_AVG_UNCHANGED to see the real util_est.enqueued value, which
should be easy to do.

This also fixes an issue reported by Xuewen Yan that util_est_update()
only used UTIL_AVG_UNCHANGED for the subtrahend of the equation:

  last_enqueued_diff = ue.enqueued - (task_util() | UTIL_AVG_UNCHANGED)

Fixes: b89997aa88f0b ("sched/pelt: Fix task util_est update filtering")
Signed-off-by: Dietmar Eggemann &lt;dietmar.eggemann@arm.com&gt;
Signed-off-by: Peter Zijlstra (Intel) &lt;peterz@infradead.org&gt;
Reviewed-by: Xuewen Yan &lt;xuewen.yan@unisoc.com&gt;
Reviewed-by: Vincent Donnefort &lt;vincent.donnefort@arm.com&gt;
Reviewed-by: Vincent Guittot &lt;vincent.guittot@linaro.org&gt;
Link: https://lore.kernel.org/r/20210602145808.1562603-1-dietmar.eggemann@arm.com
</content>
</entry>
<entry>
<title>sched: Wrap rq::lock access</title>
<updated>2021-05-12T09:43:26Z</updated>
<author>
<name>Peter Zijlstra</name>
<email>peterz@infradead.org</email>
</author>
<published>2020-11-17T23:19:31Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=5cb9eaa3d274f75539077a28cf01e3563195fa53'/>
<id>urn:sha1:5cb9eaa3d274f75539077a28cf01e3563195fa53</id>
<content type='text'>
In preparation for playing games with rq-&gt;lock, abstract the thing
using an accessor.
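
The shape of the accessor, roughly (a minimal sketch; the helper names
and bodies here are illustrative, assuming the lock field is reached
only through an accessor plus lock/unlock wrappers):

  /* Illustrative sketch: rq lock users go through an accessor. */
  static inline raw_spinlock_t *rq_lockp(struct rq *rq)
  {
          return &amp;rq-&gt;__lock;
  }

  static inline void raw_spin_rq_lock(struct rq *rq)
  {
          raw_spin_lock(rq_lockp(rq));
  }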

Signed-off-by: Peter Zijlstra (Intel) &lt;peterz@infradead.org&gt;
Tested-by: Don Hiatt &lt;dhiatt@digitalocean.com&gt;
Tested-by: Hongyu Ning &lt;hongyu.ning@linux.intel.com&gt;
Tested-by: Vincent Guittot &lt;vincent.guittot@linaro.org&gt;
Link: https://lkml.kernel.org/r/20210422123308.136465446@infradead.org
</content>
</entry>
<entry>
<title>sched/debug: Fix cgroup_path[] serialization</title>
<updated>2021-04-21T11:55:42Z</updated>
<author>
<name>Waiman Long</name>
<email>longman@redhat.com</email>
</author>
<published>2021-04-15T19:54:26Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=ad789f84c9a145f8a18744c0387cec22ec51651e'/>
<id>urn:sha1:ad789f84c9a145f8a18744c0387cec22ec51651e</id>
<content type='text'>
The handling of a sysrq key can be activated by echoing the key to
/proc/sysrq-trigger or via the magic key sequence typed into a terminal
that is connected to the system in some way (serial, USB or other
means). In the former case, the handling is done in user context; in
the latter case, it is likely to be in interrupt context.

Currently in print_cpu() of kernel/sched/debug.c, sched_debug_lock is
taken with interrupts disabled for the whole duration of the calls to
print_*_stats() and print_rq(), which could last for quite some time if
the information dump happens on the serial console.

If the system has many CPUs and sched_debug_lock is somehow busy
(e.g. parallel sysrq-t), the system may hit a hard lockup panic
depending on the actual serial console implementation of the
system.

The purpose of sched_debug_lock is to serialize the use of the global
cgroup_path[] buffer in print_cpu(). The rest of the printk() calls
don't need serialization by sched_debug_lock.

Calling printk() with interrupts disabled can still be problematic if
multiple instances are running. Allocating a stack buffer of PATH_MAX
bytes is not feasible because of the limited size of the kernel stack.

The solution implemented in this patch is to allow only one caller at a
time to use the full-size group_path[], while other simultaneous
callers have to use shorter stack buffers, with the possibility of path
name truncation. A "..." suffix will be printed if truncation may have
happened. The cgroup path name is provided for informational purposes
only, so occasional path name truncation should not be a big problem.
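
The pattern, roughly (a simplified sketch of the trylock fallback; the
stack buffer size and the truncation check are illustrative, and the
actual patch may structure this differently):

  /* Illustrative sketch, in the task-group printing path. */
  static DEFINE_SPINLOCK(sched_debug_lock);
  static char group_path[PATH_MAX];

  if (spin_trylock(&amp;sched_debug_lock)) {
          task_group_path(tg, group_path, sizeof(group_path));
          SEQ_printf(m, "%s", group_path);
          spin_unlock(&amp;sched_debug_lock);
  } else {
          char buf[128];  /* small stack buffer, may truncate */

          task_group_path(tg, buf, sizeof(buf));
          /* Flag possible truncation of the shorter buffer. */
          SEQ_printf(m, "%s%s", buf,
                     strlen(buf) &gt;= sizeof(buf) - 1 ? "..." : "");
  }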

Fixes: efe25c2c7b3a ("sched: Reinstate group names in /proc/sched_debug")
Suggested-by: Peter Zijlstra &lt;peterz@infradead.org&gt;
Signed-off-by: Waiman Long &lt;longman@redhat.com&gt;
Signed-off-by: Peter Zijlstra (Intel) &lt;peterz@infradead.org&gt;
Link: https://lkml.kernel.org/r/20210415195426.6677-1-longman@redhat.com
</content>
</entry>
<entry>
<title>sched: Warn on long periods of pending need_resched</title>
<updated>2021-04-21T11:55:41Z</updated>
<author>
<name>Paul Turner</name>
<email>pjt@google.com</email>
</author>
<published>2021-04-16T21:29:36Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=c006fac556e401a62054d065da168099ea5a5b10'/>
<id>urn:sha1:c006fac556e401a62054d065da168099ea5a5b10</id>
<content type='text'>
The CPU scheduler sets the need_resched flag to signal a schedule() on
a particular CPU. But schedule() may not happen immediately in cases
where the current task is executing in kernel mode (no preemption
state) for extended periods of time.

This patch adds a warning if need_resched is pending for more than the
time specified in the sysctl resched_latency_warn_ms. If it goes off,
it is likely that there is a missing cond_resched() somewhere.
Monitoring is done via the tick, and the accuracy is hence limited to
jiffy scale. This also means that we won't trigger the warning if the
tick is disabled.

This feature (LATENCY_WARN) is disabled by default.
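
In sketch form (a simplified illustration, called from the tick; the
helper name and the per-rq timestamp field are assumptions here, and
the real bookkeeping differs in detail):

  /* Illustrative sketch: called from scheduler_tick(), so the
   * resolution is one tick (jiffy scale). */
  static void check_resched_latency(struct rq *rq)
  {
          u64 now = rq_clock(rq);

          if (!need_resched()) {
                  rq-&gt;last_seen_need_resched_ns = 0;
                  return;
          }
          if (!rq-&gt;last_seen_need_resched_ns) {
                  rq-&gt;last_seen_need_resched_ns = now;
                  return;
          }
          if (sched_feat(LATENCY_WARN) &amp;&amp;
              now - rq-&gt;last_seen_need_resched_ns &gt;
              (u64)sysctl_resched_latency_warn_ms * NSEC_PER_MSEC)
                  WARN_ONCE(1, "need_resched pending too long on CPU%d\n",
                            cpu_of(rq));
  }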

Signed-off-by: Paul Turner &lt;pjt@google.com&gt;
Signed-off-by: Josh Don &lt;joshdon@google.com&gt;
Signed-off-by: Peter Zijlstra (Intel) &lt;peterz@infradead.org&gt;
Link: https://lkml.kernel.org/r/20210416212936.390566-1-joshdon@google.com
</content>
</entry>
<entry>
<title>sched/debug: Rename the sched_debug parameter to sched_verbose</title>
<updated>2021-04-17T11:22:44Z</updated>
<author>
<name>Peter Zijlstra</name>
<email>peterz@infradead.org</email>
</author>
<published>2021-04-15T16:23:17Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=9406415f46f6127fd31bb66f0260f7a61a8d2786'/>
<id>urn:sha1:9406415f46f6127fd31bb66f0260f7a61a8d2786</id>
<content type='text'>
CONFIG_SCHED_DEBUG is the build-time Kconfig knob; the boot param
sched_debug and the /debug/sched/debug_enabled knob control the
sched_debug_enabled variable. But what they really do is make
SCHED_DEBUG more verbose, so rename the lot.

Signed-off-by: Peter Zijlstra (Intel) &lt;peterz@infradead.org&gt;
</content>
</entry>
<entry>
<title>sched: Move /proc/sched_debug to debugfs</title>
<updated>2021-04-16T15:06:35Z</updated>
<author>
<name>Peter Zijlstra</name>
<email>peterz@infradead.org</email>
</author>
<published>2021-03-25T14:18:19Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=d27e9ae2f244805bbdc730d85fba28685d2471e5'/>
<id>urn:sha1:d27e9ae2f244805bbdc730d85fba28685d2471e5</id>
<content type='text'>
Signed-off-by: Peter Zijlstra (Intel) &lt;peterz@infradead.org&gt;
Reviewed-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;
Tested-by: Valentin Schneider &lt;valentin.schneider@arm.com&gt;
Link: https://lkml.kernel.org/r/20210412102001.548833671@infradead.org
</content>
</entry>
<entry>
<title>sched,debug: Convert sysctl sched_domains to debugfs</title>
<updated>2021-04-16T15:06:35Z</updated>
<author>
<name>Peter Zijlstra</name>
<email>peterz@infradead.org</email>
</author>
<published>2021-03-25T10:31:20Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=3b87f136f8fccddf7da016ab7d04bb3cf9b180f0'/>
<id>urn:sha1:3b87f136f8fccddf7da016ab7d04bb3cf9b180f0</id>
<content type='text'>
Stop polluting sysctl; move the SCHED_DEBUG stuff to debugfs.

Signed-off-by: Peter Zijlstra (Intel) &lt;peterz@infradead.org&gt;
Reviewed-by: Dietmar Eggemann &lt;dietmar.eggemann@arm.com&gt;
Reviewed-by: Valentin Schneider &lt;valentin.schneider@arm.com&gt;
Tested-by: Valentin Schneider &lt;valentin.schneider@arm.com&gt;
Link: https://lkml.kernel.org/r/YHgB/s4KCBQ1ifdm@hirez.programming.kicks-ass.net
</content>
</entry>
<entry>
<title>sched,preempt: Move preempt_dynamic to debug.c</title>
<updated>2021-04-16T15:06:34Z</updated>
<author>
<name>Peter Zijlstra</name>
<email>peterz@infradead.org</email>
</author>
<published>2021-03-25T11:21:38Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=1011dcce99f8026d48fdd7b9cc259e32a8b472be'/>
<id>urn:sha1:1011dcce99f8026d48fdd7b9cc259e32a8b472be</id>
<content type='text'>
Move the #ifdef SCHED_DEBUG bits to kernel/sched/debug.c in order to
collect all the debugfs bits.

Signed-off-by: Peter Zijlstra (Intel) &lt;peterz@infradead.org&gt;
Tested-by: Valentin Schneider &lt;valentin.schneider@arm.com&gt;
Link: https://lkml.kernel.org/r/20210412102001.353833279@infradead.org
</content>
</entry>
<entry>
<title>sched: Move SCHED_DEBUG sysctl to debugfs</title>
<updated>2021-04-16T15:06:34Z</updated>
<author>
<name>Peter Zijlstra</name>
<email>peterz@infradead.org</email>
</author>
<published>2021-03-24T10:43:21Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=8a99b6833c884fa0e7919030d93fecedc69fc625'/>
<id>urn:sha1:8a99b6833c884fa0e7919030d93fecedc69fc625</id>
<content type='text'>
Stop polluting sysctl with undocumented knobs that really are debug
only; move them all to /debug/sched/, alongside the /debug/sched_*
files that already exist.
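
The shape of the change, roughly (a sketch with one example knob; the
full file list and the fops follow the knobs actually being moved):

  /* Illustrative sketch: debug knobs become files under /debug/sched/. */
  static struct dentry *debugfs_sched;

  static __init int sched_init_debug(void)
  {
          debugfs_sched = debugfs_create_dir("sched", NULL);

          debugfs_create_file("features", 0644, debugfs_sched, NULL,
                              &amp;sched_feat_fops);
          debugfs_create_u32("latency_ns", 0644, debugfs_sched,
                             &amp;sysctl_sched_latency);

          return 0;
  }
  late_initcall(sched_init_debug);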

Signed-off-by: Peter Zijlstra (Intel) &lt;peterz@infradead.org&gt;
Reviewed-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;
Tested-by: Valentin Schneider &lt;valentin.schneider@arm.com&gt;
Link: https://lkml.kernel.org/r/20210412102001.287610138@infradead.org
</content>
</entry>
</feed>
