<feed xmlns='http://www.w3.org/2005/Atom'>
<title>linux/kernel/sched, branch v4.6</title>
<subtitle>Mirror of https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/
</subtitle>
<id>https://git.shady.money/linux/atom?h=v4.6</id>
<link rel='self' href='https://git.shady.money/linux/atom?h=v4.6'/>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/'/>
<updated>2016-05-11T06:25:53Z</updated>
<entry>
<title>Revert "sched/fair: Fix fairness issue on migration"</title>
<updated>2016-05-11T06:25:53Z</updated>
<author>
<name>Ingo Molnar</name>
<email>mingo@kernel.org</email>
</author>
<published>2016-05-11T06:25:53Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=53d3bc773eaa7ab1cf63585e76af7ee869d5e709'/>
<id>urn:sha1:53d3bc773eaa7ab1cf63585e76af7ee869d5e709</id>
<content type='text'>
Mike reported that this recent commit:

  3a47d5124a95 ("sched/fair: Fix fairness issue on migration")

... broke interactivity and the signal starvation test.

We have a proper fix series in the works but ran out of time for
v4.6, so revert the commit.

Reported-by: Mike Galbraith &lt;efault@gmx.de&gt;
Acked-by: Peter Zijlstra &lt;a.p.zijlstra@chello.nl&gt;
Cc: Thomas Gleixner &lt;tglx@linutronix.de&gt;
Cc: Linus Torvalds &lt;torvalds@linux-foundation.org&gt;
Cc: Peter Zijlstra &lt;peterz@infradead.org&gt;
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Ingo Molnar &lt;mingo@kernel.org&gt;
</content>
</entry>
<entry>
<title>sched/rt, sched/dl: Don't push if task's scheduling class was changed</title>
<updated>2016-05-10T08:02:46Z</updated>
<author>
<name>Xunlei Pang</name>
<email>xlpang@redhat.com</email>
</author>
<published>2016-05-09T04:11:31Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=13b5ab02ae118fc8dfdc2b8597688ec4a11d5b53'/>
<id>urn:sha1:13b5ab02ae118fc8dfdc2b8597688ec4a11d5b53</id>
<content type='text'>
We got this warning:

    WARNING: CPU: 1 PID: 2468 at kernel/sched/core.c:1161 set_task_cpu+0x1af/0x1c0
    [...]
    Call Trace:

    dump_stack+0x63/0x87
    __warn+0xd1/0xf0
    warn_slowpath_null+0x1d/0x20
    set_task_cpu+0x1af/0x1c0
    push_dl_task.part.34+0xea/0x180
    push_dl_tasks+0x17/0x30
    __balance_callback+0x45/0x5c
    __sched_setscheduler+0x906/0xb90
    SyS_sched_setattr+0x150/0x190
    do_syscall_64+0x62/0x110
    entry_SYSCALL64_slow_path+0x25/0x25

This corresponds to:

    WARN_ON_ONCE(p-&gt;state == TASK_RUNNING &amp;&amp;
                 p-&gt;sched_class == &amp;fair_sched_class &amp;&amp;
                 (p-&gt;on_rq &amp;&amp; !task_on_rq_migrating(p)))

This happens because in find_lock_later_rq(), a task whose scheduling
class was changed to the fair class is still pushed away as if it were
a deadline task ...

So, in find_lock_later_rq(), check after double_lock_balance() whether
the scheduling class of the deadline task was changed; if it was, break
and retry.

Apply the same logic to RT tasks.
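
For reference, the shape of the resulting check in find_lock_later_rq()
(an illustrative sketch, not the verbatim hunk; find_lock_lowest_rq()
gets the analogous rt_task() check):

    /* Retry if something changed. */
    if (double_lock_balance(rq, later_rq)) {
            if (unlikely(task_rq(task) != rq ||
                         !cpumask_test_cpu(later_rq-&gt;cpu,
                                           &amp;task-&gt;cpus_allowed) ||
                         task_running(rq, task) ||
                         !dl_task(task) ||
                         !task_on_rq_queued(task))) {
                    /* The task moved or changed class under us. */
                    double_unlock_balance(rq, later_rq);
                    later_rq = NULL;
                    break;
            }
    }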

Signed-off-by: Xunlei Pang &lt;xlpang@redhat.com&gt;
Reviewed-by: Steven Rostedt &lt;rostedt@goodmis.org&gt;
Acked-by: Peter Zijlstra &lt;a.p.zijlstra@chello.nl&gt;
Cc: Peter Zijlstra &lt;peterz@infradead.org&gt;
Cc: Steven Rostedt &lt;rostedt@goodmis.org&gt;
Cc: Juri Lelli &lt;juri.lelli@arm.com&gt;
Link: http://lkml.kernel.org/r/1462767091-1215-1-git-send-email-xlpang@redhat.com
Signed-off-by: Ingo Molnar &lt;mingo@kernel.org&gt;
</content>
</entry>
<entry>
<title>sched/fair: Fix !CONFIG_SMP kernel cpufreq governor breakage</title>
<updated>2016-05-07T05:45:34Z</updated>
<author>
<name>Rafael J. Wysocki</name>
<email>rafael.j.wysocki@intel.com</email>
</author>
<published>2016-05-06T12:58:43Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=536bd00cdbb7b908573e5a93bae67b64cbae60d8'/>
<id>urn:sha1:536bd00cdbb7b908573e5a93bae67b64cbae60d8</id>
<content type='text'>
The following commit:

  34e2c555f3e1 ("cpufreq: Add mechanism for registering utilization update callbacks")

overlooked the fact that update_load_avg(), where CFS invokes cpufreq
utilization update callbacks, becomes an empty stub on UP kernels.

As a consequence, on !CONFIG_SMP kernels cpufreq governors are never
invoked from CFS, so they never get a chance to evaluate CPU
performance levels and update them often enough.

Needless to say, things don't work as expected then.

Fix the problem by making the !CONFIG_SMP stub of update_load_avg()
invoke cpufreq update callbacks too.
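
The resulting !CONFIG_SMP stub then looks roughly like this (a sketch
of the fix, abbreviated):

    static inline void update_load_avg(struct sched_entity *se, int not_used)
    {
            struct cfs_rq *cfs_rq = cfs_rq_of(se);
            struct rq *rq = rq_of(cfs_rq);

            /* No SMP load tracking to update, but still kick cpufreq. */
            cpufreq_trigger_update(rq_clock(rq));
    }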

Reported-by: Steve Muckle &lt;steve.muckle@linaro.org&gt;
Tested-by: Steve Muckle &lt;steve.muckle@linaro.org&gt;
Signed-off-by: Rafael J. Wysocki &lt;rafael.j.wysocki@intel.com&gt;
Acked-by: Steve Muckle &lt;steve.muckle@linaro.org&gt;
Cc: Linus Torvalds &lt;torvalds@linux-foundation.org&gt;
Cc: Linux PM list &lt;linux-pm@vger.kernel.org&gt;
Cc: Peter Zijlstra &lt;peterz@infradead.org&gt;
Cc: Srinivas Pandruvada &lt;srinivas.pandruvada@linux.intel.com&gt;
Cc: Thomas Gleixner &lt;tglx@linutronix.de&gt;
Cc: Viresh Kumar &lt;viresh.kumar@linaro.org&gt;
Fixes: 34e2c555f3e1 ("cpufreq: Add mechanism for registering utilization update callbacks")
Link: http://lkml.kernel.org/r/6282396.VVEdgVYxO3@vostro.rjw.lan
Signed-off-by: Ingo Molnar &lt;mingo@kernel.org&gt;
</content>
</entry>
<entry>
<title>nohz/full, sched/rt: Fix missed tick-reenabling bug in sched_can_stop_tick()</title>
<updated>2016-04-28T08:28:55Z</updated>
<author>
<name>Peter Zijlstra</name>
<email>peterz@infradead.org</email>
</author>
<published>2016-04-21T16:03:15Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=2548d546d40c0014efdde88a53bf7896e917dcce'/>
<id>urn:sha1:2548d546d40c0014efdde88a53bf7896e917dcce</id>
<content type='text'>
Chris Metcalf reported that sched_can_stop_tick() sometimes fails to
re-enable the tick.

His observed problem is that rq-&gt;cfs.nr_running can be 1 even though
there are multiple runnable CFS tasks. This happens in the cgroup case,
where cfs.nr_running only counts the runnable entities at that level.

If there is a single runnable cgroup (which can have an arbitrary
number of runnable child entries itself) rq-&gt;cfs.nr_running will be 1.

However, looking at that function, I think there are more problems with it.

It seems to assume that if there are FIFO tasks, those will run. This
is incorrect: a FIFO task can have a lower priority than an RR task, in
which case the RR task will run.

So the whole fifo_nr_running test seems misplaced; it should go after
the rr_nr_running test. That is, only if !rr_nr_running can we use
fifo_nr_running like this.
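
With the tests reordered, the RT part of sched_can_stop_tick() reads
roughly as follows (sketch):

    if (rq-&gt;rt.rr_nr_running)
            return rq-&gt;rt.rr_nr_running == 1;

    /* No RR tasks; FIFO tasks cannot force-preempt each other. */
    fifo_nr_running = rq-&gt;rt.rt_nr_running - rq-&gt;rt.rr_nr_running;
    if (fifo_nr_running)
            return true;

    /* Only CFS tasks remain; more than one needs the tick. */
    if (rq-&gt;nr_running &gt; 1)
            return false;

    return true;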

Reported-by: Chris Metcalf &lt;cmetcalf@mellanox.com&gt;
Tested-by: Chris Metcalf &lt;cmetcalf@mellanox.com&gt;
Signed-off-by: Peter Zijlstra (Intel) &lt;peterz@infradead.org&gt;
Cc: Alexander Shishkin &lt;alexander.shishkin@linux.intel.com&gt;
Cc: Arnaldo Carvalho de Melo &lt;acme@redhat.com&gt;
Cc: Christoph Lameter &lt;cl@linux.com&gt;
Cc: Frederic Weisbecker &lt;fweisbec@gmail.com&gt;
Cc: Jiri Olsa &lt;jolsa@redhat.com&gt;
Cc: Luiz Capitulino &lt;lcapitulino@redhat.com&gt;
Cc: Mike Galbraith &lt;efault@gmx.de&gt;
Cc: Peter Zijlstra &lt;peterz@infradead.org&gt;
Cc: Rik van Riel &lt;riel@redhat.com&gt;
Cc: Stephane Eranian &lt;eranian@google.com&gt;
Cc: Thomas Gleixner &lt;tglx@linutronix.de&gt;
Cc: Vince Weaver &lt;vincent.weaver@maine.edu&gt;
Cc: Viresh Kumar &lt;viresh.kumar@linaro.org&gt;
Cc: Wanpeng Li &lt;kernellwp@gmail.com&gt;
Fixes: 76d92ac305f2 ("sched: Migrate sched to use new tick dependency mask model")
Link: http://lkml.kernel.org/r/20160421160315.GK24771@twins.programming.kicks-ass.net
Signed-off-by: Ingo Molnar &lt;mingo@kernel.org&gt;
</content>
</entry>
<entry>
<title>locking/atomic, sched: Unexport fetch_or()</title>
<updated>2016-03-29T09:52:11Z</updated>
<author>
<name>Frederic Weisbecker</name>
<email>fweisbec@gmail.com</email>
</author>
<published>2016-03-24T14:38:01Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=5529578a27288d11d4d15635c258c6dde0f0fb10'/>
<id>urn:sha1:5529578a27288d11d4d15635c258c6dde0f0fb10</id>
<content type='text'>
This patch functionally reverts:

  5fd7a09cfb8c ("atomic: Export fetch_or()")

During the merge Linus observed that the generic version of fetch_or()
was messy:

  " This makes the ugly "fetch_or()" macro that the scheduler used
    internally a new generic helper, and does a bad job at it. "

  e23604edac2a Merge branch 'timers-nohz-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip

Now that we have introduced atomic_fetch_or(), fetch_or() is only used
by the scheduler in order to deal with thread_info flags, whose type
can vary across architectures.

Let's confine fetch_or() back to the scheduler so that we encourage
future users to use the more robust and well-typed atomic_t version
instead.

While at it, fetch_or() gets robustified, pasting improvements from a
previous patch by Ingo Molnar that avoid needless expression
re-evaluation in the loop.
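
The robustified macro evaluates 'ptr' and 'mask' exactly once and loops
on cmpxchg(), roughly (sketch):

    #define fetch_or(ptr, mask)                                        \
            ({                                                         \
                    typeof(ptr) _ptr = (ptr);                          \
                    typeof(mask) _mask = (mask);                       \
                    typeof(*_ptr) _old, _val = *_ptr;                  \
                                                                       \
                    for (;;) {                                         \
                            _old = cmpxchg(_ptr, _val, _val | _mask);  \
                            if (_old == _val)                          \
                                    break;                             \
                            _val = _old;                               \
                    }                                                  \
                    _old;                                              \
            })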

Reported-by: Linus Torvalds &lt;torvalds@linux-foundation.org&gt;
Signed-off-by: Frederic Weisbecker &lt;fweisbec@gmail.com&gt;
Cc: Andrew Morton &lt;akpm@linux-foundation.org&gt;
Cc: Peter Zijlstra &lt;peterz@infradead.org&gt;
Cc: Thomas Gleixner &lt;tglx@linutronix.de&gt;
Link: http://lkml.kernel.org/r/1458830281-4255-4-git-send-email-fweisbec@gmail.com
Signed-off-by: Ingo Molnar &lt;mingo@kernel.org&gt;
</content>
</entry>
<entry>
<title>Merge branch 'sched-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip</title>
<updated>2016-03-24T16:42:50Z</updated>
<author>
<name>Linus Torvalds</name>
<email>torvalds@linux-foundation.org</email>
</author>
<published>2016-03-24T16:42:50Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=be53f58fa0fcd97c62a84f2eb98cff528f8b2443'/>
<id>urn:sha1:be53f58fa0fcd97c62a84f2eb98cff528f8b2443</id>
<content type='text'>
Pull scheduler fixes from Ingo Molnar:
 "Misc fixes: a cgroup fix, a fair-scheduler migration accounting fix, a
  cputime fix and two cpuacct cleanups"

* 'sched-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  sched/cpuacct: Simplify the cpuacct code
  sched/cpuacct: Rename parameter in cpuusage_write() for readability
  sched/fair: Add comments to explain select_idle_sibling()
  sched/fair: Fix fairness issue on migration
  sched/cgroup: Fix/cleanup cgroup teardown/init
  sched/cputime: Fix steal time accounting vs. CPU hotplug
</content>
</entry>
<entry>
<title>kernel: add kcov code coverage</title>
<updated>2016-03-22T22:36:02Z</updated>
<author>
<name>Dmitry Vyukov</name>
<email>dvyukov@google.com</email>
</author>
<published>2016-03-22T21:27:30Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=5c9a8750a6409c63a0f01d51a9024861022f6593'/>
<id>urn:sha1:5c9a8750a6409c63a0f01d51a9024861022f6593</id>
<content type='text'>
kcov provides code coverage collection for coverage-guided fuzzing
(randomized testing).  Coverage-guided fuzzing is a testing technique
that uses coverage feedback to determine new interesting inputs to a
system.  A notable user-space example is AFL
(http://lcamtuf.coredump.cx/afl/).  However, this technique is not
widely used for kernel testing due to missing compiler and kernel
support.

kcov does not aim to collect as much coverage as possible.  It aims to
collect more or less stable coverage that is a function of syscall
inputs.  To achieve this goal it does not collect coverage in soft/hard
interrupts, and instrumentation of some inherently non-deterministic or
uninteresting parts of the kernel (e.g. the scheduler, locking) is
disabled.

Currently there is a single coverage collection mode (tracing), but the
API anticipates additional collection modes.  Initially I also
implemented a second mode which exposes coverage in a fixed-size hash
table of counters (what Quentin used in his original patch).  I've
dropped the second mode for simplicity.

This patch adds the necessary support on the kernel side.  The
complementary compiler support was added in GCC revision 231296.

We've used this support to build the syzkaller system call fuzzer,
which has found 90 kernel bugs in just 2 months:

  https://github.com/google/syzkaller/wiki/Found-Bugs

We've also found 30+ bugs in our internal systems with syzkaller.
Another (as yet unexplored) direction where kcov coverage would greatly
help is more traditional "blob mutation": for example, mounting a
random blob as a filesystem, or receiving a random blob over the wire.

Why not gcov?  A typical fuzzing loop looks as follows: (1) reset
coverage, (2) execute a bit of code, (3) collect coverage, repeat.  The
coverage for one iteration can be just a dozen basic blocks (e.g.  an
invalid input).  In that context gcov becomes prohibitively expensive,
as the reset/collect steps depend on the total number of basic
blocks/edges in the program (about 2M in the case of the kernel).  The
cost of kcov depends only on the number of executed basic blocks/edges.
On top of that, the kernel requires per-thread coverage, because there
are always background threads and unrelated processes that also produce
coverage.  With inlined gcov instrumentation, per-thread coverage is
not possible.
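
For reference, a minimal user-space sketch of that reset/execute/collect
loop against kcov's trace mode (error handling elided; the ioctl numbers
follow the documentation added by this patch):

    #include &lt;fcntl.h&gt;
    #include &lt;stdio.h&gt;
    #include &lt;sys/ioctl.h&gt;
    #include &lt;sys/mman.h&gt;
    #include &lt;unistd.h&gt;

    #define KCOV_INIT_TRACE _IOR('c', 1, unsigned long)
    #define KCOV_ENABLE     _IO('c', 100)
    #define KCOV_DISABLE    _IO('c', 101)
    #define COVER_SIZE      (64 &lt;&lt; 10)

    int main(void)
    {
            unsigned long *cover, n, i;
            /* A single fd collects coverage for a single thread. */
            int fd = open("/sys/kernel/debug/kcov", O_RDWR);

            /* Set trace mode and trace buffer size (in PCs). */
            ioctl(fd, KCOV_INIT_TRACE, COVER_SIZE);
            /* The buffer is shared between kernel- and user-space. */
            cover = mmap(NULL, COVER_SIZE * sizeof(unsigned long),
                         PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
            /* Enable coverage collection on the current thread. */
            ioctl(fd, KCOV_ENABLE, 0);

            /* (1) reset coverage; cover[0] holds the PC count. */
            __atomic_store_n(&amp;cover[0], 0, __ATOMIC_RELAXED);
            /* (2) execute a bit of code: the syscall under test. */
            read(-1, NULL, 0);
            /* (3) collect coverage: the PCs follow the count. */
            n = __atomic_load_n(&amp;cover[0], __ATOMIC_RELAXED);
            for (i = 0; i &lt; n; i++)
                    printf("0x%lx\n", cover[i + 1]);

            ioctl(fd, KCOV_DISABLE, 0);
            munmap(cover, COVER_SIZE * sizeof(unsigned long));
            close(fd);
            return 0;
    }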

kcov exposes kernel PCs and control flow to user-space, which is
insecure.  But debugfs should not be mapped as user accessible.

Based on a patch by Quentin Casasnovas.

[akpm@linux-foundation.org: make task_struct.kcov_mode have type `enum kcov_mode']
[akpm@linux-foundation.org: unbreak allmodconfig]
[akpm@linux-foundation.org: follow x86 Makefile layout standards]
Signed-off-by: Dmitry Vyukov &lt;dvyukov@google.com&gt;
Reviewed-by: Kees Cook &lt;keescook@chromium.org&gt;
Cc: syzkaller &lt;syzkaller@googlegroups.com&gt;
Cc: Vegard Nossum &lt;vegard.nossum@oracle.com&gt;
Cc: Catalin Marinas &lt;catalin.marinas@arm.com&gt;
Cc: Tavis Ormandy &lt;taviso@google.com&gt;
Cc: Will Deacon &lt;will.deacon@arm.com&gt;
Cc: Quentin Casasnovas &lt;quentin.casasnovas@oracle.com&gt;
Cc: Kostya Serebryany &lt;kcc@google.com&gt;
Cc: Eric Dumazet &lt;edumazet@google.com&gt;
Cc: Alexander Potapenko &lt;glider@google.com&gt;
Cc: Kees Cook &lt;keescook@google.com&gt;
Cc: Bjorn Helgaas &lt;bhelgaas@google.com&gt;
Cc: Sasha Levin &lt;sasha.levin@oracle.com&gt;
Cc: David Drysdale &lt;drysdale@google.com&gt;
Cc: Ard Biesheuvel &lt;ard.biesheuvel@linaro.org&gt;
Cc: Andrey Ryabinin &lt;ryabinin.a.a@gmail.com&gt;
Cc: Kirill A. Shutemov &lt;kirill@shutemov.name&gt;
Cc: Jiri Slaby &lt;jslaby@suse.cz&gt;
Cc: Ingo Molnar &lt;mingo@elte.hu&gt;
Cc: Thomas Gleixner &lt;tglx@linutronix.de&gt;
Cc: "H. Peter Anvin" &lt;hpa@zytor.com&gt;
Signed-off-by: Andrew Morton &lt;akpm@linux-foundation.org&gt;
Signed-off-by: Linus Torvalds &lt;torvalds@linux-foundation.org&gt;
</content>
</entry>
<entry>
<title>sched/cpuacct: Simplify the cpuacct code</title>
<updated>2016-03-21T10:00:28Z</updated>
<author>
<name>Zhao Lei</name>
<email>zhaolei@cn.fujitsu.com</email>
</author>
<published>2016-03-17T04:19:43Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=73e6aafd9ea81498d31361f01db84a0118da2d1c'/>
<id>urn:sha1:73e6aafd9ea81498d31361f01db84a0118da2d1c</id>
<content type='text'>
 - Use a for() loop instead of a while() loop in some functions
   to make the code simpler.

 - Use this_cpu_ptr() instead of per_cpu_ptr() to make the code
   cleaner and a bit faster.
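
For example, cpuacct_charge() then collapses to roughly (sketch):

    for (ca = task_ca(tsk); ca; ca = parent_ca(ca))
            *this_cpu_ptr(ca-&gt;cpuusage) += cputime;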

Suggested-by: Peter Zijlstra &lt;peterz@infradead.org&gt;
Signed-off-by: Zhao Lei &lt;zhaolei@cn.fujitsu.com&gt;
Signed-off-by: Peter Zijlstra (Intel) &lt;peterz@infradead.org&gt;
Cc: Linus Torvalds &lt;torvalds@linux-foundation.org&gt;
Cc: Tejun Heo &lt;htejun@gmail.com&gt;
Cc: Thomas Gleixner &lt;tglx@linutronix.de&gt;
Link: http://lkml.kernel.org/r/d8a7ef9592f55224630cb26dea239f05b6398a4e.1458187654.git.zhaolei@cn.fujitsu.com
Signed-off-by: Ingo Molnar &lt;mingo@kernel.org&gt;
</content>
</entry>
<entry>
<title>sched/cpuacct: Rename parameter in cpuusage_write() for readability</title>
<updated>2016-03-21T09:59:29Z</updated>
<author>
<name>Dongsheng Yang</name>
<email>yangds.fnst@cn.fujitsu.com</email>
</author>
<published>2015-12-21T11:14:42Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=1a736b77a3f50910843d076623204ba6e5057dc1'/>
<id>urn:sha1:1a736b77a3f50910843d076623204ba6e5057dc1</id>
<content type='text'>
The name of the 'reset' parameter to cpuusage_write() is quite confusing,
because the only valid value we allow is '0', so !reset is actually the
case that resets ...

Rename it to 'val' and explain it in a comment that we only allow 0.
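
So the guard at the top of cpuusage_write() now reads roughly (sketch):

    /* Only allow '0' here to do a reset. */
    if (val)
            return -EINVAL;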

Signed-off-by: Dongsheng Yang &lt;yangds.fnst@cn.fujitsu.com&gt;
Cc: Linus Torvalds &lt;torvalds@linux-foundation.org&gt;
Cc: Peter Zijlstra &lt;peterz@infradead.org&gt;
Cc: Thomas Gleixner &lt;tglx@linutronix.de&gt;
Cc: cgroups@vger.kernel.org
Cc: tj@kernel.org
Link: http://lkml.kernel.org/r/1450696483-2864-1-git-send-email-yangds.fnst@cn.fujitsu.com
Signed-off-by: Ingo Molnar &lt;mingo@kernel.org&gt;
</content>
</entry>
<entry>
<title>sched/fair: Add comments to explain select_idle_sibling()</title>
<updated>2016-03-21T09:52:51Z</updated>
<author>
<name>Matt Fleming</name>
<email>matt@codeblueprint.co.uk</email>
</author>
<published>2016-03-09T14:59:08Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=d4335581dc30ec6545999c7443bb9fead274a980'/>
<id>urn:sha1:d4335581dc30ec6545999c7443bb9fead274a980</id>
<content type='text'>
It's not entirely obvious at first glance how the main loop in
select_idle_sibling() works. Sprinkle a few comments to explain the
design and the intention behind the loop, based on some conversations
with Mike and Peter.

Signed-off-by: Matt Fleming &lt;matt@codeblueprint.co.uk&gt;
Signed-off-by: Peter Zijlstra (Intel) &lt;peterz@infradead.org&gt;
Cc: Linus Torvalds &lt;torvalds@linux-foundation.org&gt;
Cc: Mel Gorman &lt;mgorman@suse.com&gt;
Cc: Mike Galbraith &lt;mgalbraith@suse.de&gt;
Cc: Peter Zijlstra &lt;peterz@infradead.org&gt;
Cc: Thomas Gleixner &lt;tglx@linutronix.de&gt;
Link: http://lkml.kernel.org/r/1457535548-15329-1-git-send-email-matt@codeblueprint.co.uk
Signed-off-by: Ingo Molnar &lt;mingo@kernel.org&gt;
</content>
</entry>
</feed>
