<feed xmlns='http://www.w3.org/2005/Atom'>
<title>linux/kernel/sched_rt.c, branch v2.6.26</title>
<subtitle>Mirror of https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/</subtitle>
<id>https://git.shady.money/linux/atom?h=v2.6.26</id>
<link rel='self' href='https://git.shady.money/linux/atom?h=v2.6.26'/>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/'/>
<updated>2008-06-20T09:00:19Z</updated>
<entry>
<title>sched: rt: don't stop the period timer when there are tasks wanting to run</title>
<updated>2008-06-20T09:00:19Z</updated>
<author>
<name>Peter Zijlstra</name>
<email>a.p.zijlstra@chello.nl</email>
</author>
<published>2008-06-19T12:22:28Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=8a8cde163ea724baf74e7752a31a69d3121a240e'/>
<id>urn:sha1:8a8cde163ea724baf74e7752a31a69d3121a240e</id>
<content type='text'>
If the period timer is stopped while there are still tasks wanting to run,
a group that gets throttled will never be replenished, and hence never
wakes up again.
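
A minimal sketch of the idea, assuming the rt_rq fields of sched_rt.c (the
real change sits in the rt period-timer handler, and details may differ):

    /*
     * Consider queued tasks, not just leftover rt_time, when
     * deciding whether the period timer may go idle.
     */
    if (rt_rq->rt_time || rt_rq->rt_nr_running)
        idle = 0;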

Reported-by: "Daniel K." &lt;dk@uw.no&gt;
Signed-off-by: Peter Zijlstra &lt;a.p.zijlstra@chello.nl&gt;
Tested-by: Daniel K. &lt;dk@uw.no&gt;
Signed-off-by: Ingo Molnar &lt;mingo@elte.hu&gt;
</content>
</entry>
<entry>
<title>sched: rt-group: fix RR buglet</title>
<updated>2008-06-19T07:06:59Z</updated>
<author>
<name>Peter Zijlstra</name>
<email>a.p.zijlstra@chello.nl</email>
</author>
<published>2008-06-19T07:06:59Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=15a8641eadb492ef7c5489faa25256967bdfd303'/>
<id>urn:sha1:15a8641eadb492ef7c5489faa25256967bdfd303</id>
<content type='text'>
In task_tick_rt() we first call update_curr_rt(), which can dequeue a runqueue
because it has run out of runtime, and then we try to requeue the task if it
has also exhausted its RR quota. Obviously, requeueing something that is no
longer on the runqueue will not have the expected result.
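
A sketch of the fix, assuming the on_rt_rq() helper from sched_rt.c: make the
RR requeue a no-op for an entity that is no longer queued:

    static void requeue_rt_entity(struct rt_rq *rt_rq,
                                  struct sched_rt_entity *rt_se)
    {
        /* an entity dequeued by update_curr_rt() is left alone */
        if (on_rt_rq(rt_se)) {
            struct rt_prio_array *array = &amp;rt_rq->active;

            list_move_tail(&amp;rt_se->run_list,
                           array->queue + rt_se_prio(rt_se));
        }
    }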

Signed-off-by: Peter Zijlstra &lt;a.p.zijlstra@chello.nl&gt;
Tested-by: Daniel K. &lt;dk@uw.no&gt;
Signed-off-by: Ingo Molnar &lt;mingo@elte.hu&gt;
</content>
</entry>
<entry>
<title>sched: rt-group: hierarchy aware throttle</title>
<updated>2008-06-19T07:06:57Z</updated>
<author>
<name>Peter Zijlstra</name>
<email>a.p.zijlstra@chello.nl</email>
</author>
<published>2008-06-19T07:06:57Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=ad2a3f13b7258a5daaaeb8cff9f835aac468b71d'/>
<id>urn:sha1:ad2a3f13b7258a5daaaeb8cff9f835aac468b71d</id>
<content type='text'>
The bandwidth throttle code dequeues a group when it runs out of quota, and
re-queues it once the period rolls over and the quota gets refreshed.

Sadly it failed to take the hierarchy into consideration. Share more of the
enqueue/dequeue code with regular task operations.

Also, some operations like sched_setscheduler() can dequeue/enqueue tasks that
are in throttled runqueues; we should not inadvertently re-enqueue empty
runqueues, so check for that.
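
A sketch of the resulting shared path (helper names as in sched_rt.c,
simplified): tear the whole stack down first, then re-enqueue up the
hierarchy, skipping group runqueues that are empty:

    static void enqueue_rt_entity(struct sched_rt_entity *rt_se)
    {
        dequeue_rt_stack(rt_se);
        for_each_sched_rt_entity(rt_se)
            __enqueue_rt_entity(rt_se);
    }

    static void dequeue_rt_entity(struct sched_rt_entity *rt_se)
    {
        dequeue_rt_stack(rt_se);

        for_each_sched_rt_entity(rt_se) {
            struct rt_rq *rt_rq = group_rt_rq(rt_se);

            /* do not re-enqueue an empty group runqueue */
            if (rt_rq &amp;&amp; rt_rq->rt_nr_running)
                __enqueue_rt_entity(rt_se);
        }
    }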

Signed-off-by: Peter Zijlstra &lt;a.p.zijlstra@chello.nl&gt;
Tested-by: Daniel K. &lt;dk@uw.no&gt;
Signed-off-by: Ingo Molnar &lt;mingo@elte.hu&gt;
</content>
</entry>
<entry>
<title>revert ("sched: fair-group: SMP-nice for group scheduling")</title>
<updated>2008-05-29T09:28:57Z</updated>
<author>
<name>Ingo Molnar</name>
<email>mingo@elte.hu</email>
</author>
<published>2008-05-29T09:28:57Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=6363ca57c76b7b83639ca8c83fc285fa26a7880e'/>
<id>urn:sha1:6363ca57c76b7b83639ca8c83fc285fa26a7880e</id>
<content type='text'>
Yanmin Zhang reported:

Comparing with 2.6.25, VolanoMark shows a big regression with kernel
2.6.26-rc1: about 50% on my 8-core Stoakley, 16-core Tigerton, and Itanium
Montecito machines.

With bisect, I located the following patch:

| 18d95a2832c1392a2d63227a7a6d433cb9f2037e is first bad commit
| commit 18d95a2832c1392a2d63227a7a6d433cb9f2037e
| Author: Peter Zijlstra &lt;a.p.zijlstra@chello.nl&gt;
| Date:   Sat Apr 19 19:45:00 2008 +0200
|
|     sched: fair-group: SMP-nice for group scheduling

Revert it so that we get v2.6.25 behavior.

Bisected-by: Yanmin Zhang &lt;yanmin_zhang@linux.intel.com&gt;
Signed-off-by: Ingo Molnar &lt;mingo@elte.hu&gt;
</content>
</entry>
<entry>
<title>sched: fix RT task-wakeup logic</title>
<updated>2008-05-05T21:56:18Z</updated>
<author>
<name>Gregory Haskins</name>
<email>ghaskins@novell.com</email>
</author>
<published>2008-04-23T11:13:29Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=8ae121ac8666b0421aa20fd80d4597ec66fa54bc'/>
<id>urn:sha1:8ae121ac8666b0421aa20fd80d4597ec66fa54bc</id>
<content type='text'>
Dmitry Adamushko pointed out a logic error in task_wake_up_rt() where the
condition will always evaluate to "true".  You can find the thread here:

http://lkml.org/lkml/2008/4/22/296

In reality, we only want to try to push tasks away when a wakeup request is
not going to preempt the current task.  So let's fix it.

Note: we introduce test_tsk_need_resched() instead of open-coding the flag
check, so that the merge conflict with -rt will remind us that we may need
to support NEEDS_RESCHED_DELAYED in the future, too.
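
For illustration, the helper is a thin wrapper around the thread-flag test,
and the wakeup path uses it to skip the push when a reschedule is already
pending (a simplified sketch of the change):

    static inline int test_tsk_need_resched(struct task_struct *tsk)
    {
        return unlikely(test_tsk_thread_flag(tsk, TIF_NEED_RESCHED));
    }

    /* in task_wake_up_rt(): only push when we will not preempt current */
    if (!task_running(rq, p) &amp;&amp;
        !test_tsk_need_resched(rq->curr) &amp;&amp;
        rq->rt.overloaded)
        push_rt_tasks(rq);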

Signed-off-by: Gregory Haskins &lt;ghaskins@novell.com&gt;
CC: Dmitry Adamushko &lt;dmitry.adamushko@gmail.com&gt;
CC: Steven Rostedt &lt;rostedt@goodmis.org&gt;
Signed-off-by: Ingo Molnar &lt;mingo@elte.hu&gt;
</content>
</entry>
<entry>
<title>sched: make rt_sched_class, idle_sched_class static</title>
<updated>2008-05-05T21:56:17Z</updated>
<author>
<name>Harvey Harrison</name>
<email>harvey.harrison@gmail.com</email>
</author>
<published>2008-04-25T17:53:13Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=2abdad0a4cd8f9413f778cc998e0ee7d60b28417'/>
<id>urn:sha1:2abdad0a4cd8f9413f778cc998e0ee7d60b28417</id>
<content type='text'>
The C files are included directly in sched.c, so they are
effectively static.
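
For context, this is the (abbreviated) include pattern in kernel/sched.c, so
the class objects can lose their external linkage:

    /* in kernel/sched.c: the class implementations are part of this TU */
    #include "sched_idletask.c"
    #include "sched_fair.c"
    #include "sched_rt.c"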

Signed-off-by: Harvey Harrison &lt;harvey.harrison@gmail.com&gt;
Acked-by: Peter Zijlstra &lt;a.p.zijlstra@chello.nl&gt;
Signed-off-by: Ingo Molnar &lt;mingo@elte.hu&gt;
</content>
</entry>
<entry>
<title>sched: rt-group: optimize dequeue_rt_stack</title>
<updated>2008-04-19T17:45:00Z</updated>
<author>
<name>Peter Zijlstra</name>
<email>a.p.zijlstra@chello.nl</email>
</author>
<published>2008-04-19T17:45:00Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=58d6c2d72f8628f39e8689fbde8aa177fcf00a37'/>
<id>urn:sha1:58d6c2d72f8628f39e8689fbde8aa177fcf00a37</id>
<content type='text'>
Now that the group hierarchy can have an arbitrary depth, the O(n^2) nature
of RT task dequeues will really hurt. Optimize this by providing space to
store the tree path, so we can walk it the other way.
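
A sketch of the optimized walk, assuming a new back pointer in
struct sched_rt_entity that records the path on the way up:

    static void dequeue_rt_stack(struct sched_rt_entity *rt_se)
    {
        struct sched_rt_entity *back = NULL;

        /* first pass: walk bottom-up, remembering the path */
        for_each_sched_rt_entity(rt_se) {
            rt_se->back = back;
            back = rt_se;
        }

        /* second pass: dequeue top-down in a single O(depth) walk */
        for (rt_se = back; rt_se; rt_se = rt_se->back) {
            if (on_rt_rq(rt_se))
                dequeue_rt_entity(rt_se);
        }
    }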

Signed-off-by: Peter Zijlstra &lt;a.p.zijlstra@chello.nl&gt;
Signed-off-by: Ingo Molnar &lt;mingo@elte.hu&gt;
</content>
</entry>
<entry>
<title>sched: fair-group: SMP-nice for group scheduling</title>
<updated>2008-04-19T17:45:00Z</updated>
<author>
<name>Peter Zijlstra</name>
<email>a.p.zijlstra@chello.nl</email>
</author>
<published>2008-04-19T17:45:00Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=18d95a2832c1392a2d63227a7a6d433cb9f2037e'/>
<id>urn:sha1:18d95a2832c1392a2d63227a7a6d433cb9f2037e</id>
<content type='text'>
Implement SMP nice support for the full group hierarchy.

On each load-balance action, compile a sched_domain-wide view of the full
task_group tree. We compute the domain-wide view when walking down the
hierarchy, and readjust the weights when walking back up.

After collecting and readjusting the domain-wide view, we try to balance the
tasks within the task_groups. The current approach naively balances each
task group until we've moved the targeted amount of load.
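
A purely hypothetical sketch of the two-pass walk described above
(walk_tg_tree(), aggregate_load_down() and adjust_shares_up() are invented
names for illustration):

    static void walk_tg_tree(struct task_group *tg, struct sched_domain *sd)
    {
        struct task_group *child;

        /* walking down: accumulate the sched_domain-wide view */
        aggregate_load_down(tg, sd);

        list_for_each_entry_rcu(child, &amp;tg->children, siblings)
            walk_tg_tree(child, sd);

        /* walking back up: readjust the group's weights */
        adjust_shares_up(tg, sd);
    }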

Inspired by Srivatsa Vaddagiri's previous code and Abhishek Chandra's H-SMP
paper.

XXX: there will be some numerical issues due to the limited range of
     SCHED_LOAD_SCALE w.r.t. representing a task_group's influence on the
     total weight. When the tree is deep enough, or the task weight small
     enough, we'll run out of bits.

Signed-off-by: Peter Zijlstra &lt;a.p.zijlstra@chello.nl&gt;
CC: Abhishek Chandra &lt;chandra@cs.umn.edu&gt;
CC: Srivatsa Vaddagiri &lt;vatsa@linux.vnet.ibm.com&gt;
Signed-off-by: Ingo Molnar &lt;mingo@elte.hu&gt;
</content>
</entry>
<entry>
<title>sched: mix tasks and groups</title>
<updated>2008-04-19T17:44:59Z</updated>
<author>
<name>Dhaval Giani</name>
<email>dhaval@linux.vnet.ibm.com</email>
</author>
<published>2008-04-19T17:44:59Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=354d60c2ff72d86627dfe2089d186824abf4bb8e'/>
<id>urn:sha1:354d60c2ff72d86627dfe2089d186824abf4bb8e</id>
<content type='text'>
This patch allows tasks and groups to coexist in the same cfs_rq. With this
change, CFS group scheduling follows a 1/(M+N) fairness model instead of a
1/(1+N) model, where M tasks and N groups exist at the cfs_rq level.
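
For example, with M = 2 tasks and N = 1 group on the same cfs_rq, each task
and the group now each receive 1/(2+1) = 1/3 of the CPU, rather than the two
tasks collectively sharing one half while the group gets the other.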

[a.p.zijlstra@chello.nl: rt bits and assorted fixes]
Signed-off-by: Dhaval Giani &lt;dhaval@linux.vnet.ibm.com&gt;
Signed-off-by: Srivatsa Vaddagiri &lt;vatsa@linux.vnet.ibm.com&gt;
Signed-off-by: Peter Zijlstra &lt;a.p.zijlstra@chello.nl&gt;
Signed-off-by: Ingo Molnar &lt;mingo@elte.hu&gt;
</content>
</entry>
<entry>
<title>sched: add new set_cpus_allowed_ptr function</title>
<updated>2008-04-19T17:44:59Z</updated>
<author>
<name>Mike Travis</name>
<email>travis@sgi.com</email>
</author>
<published>2008-03-26T21:23:49Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=cd8ba7cd9be0192348c2836cb6645d9b2cd2bfd2'/>
<id>urn:sha1:cd8ba7cd9be0192348c2836cb6645d9b2cd2bfd2</id>
<content type='text'>
Add a new function that accepts a pointer to the "newly allowed cpus"
cpumask argument.

int set_cpus_allowed_ptr(struct task_struct *p, const cpumask_t *new_mask)

The current set_cpus_allowed() function is modified to use the above, but
this does not result in an ABI change.  And with some compiler optimization
help, it may not introduce any additional overhead.
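
Concretely, the old entry point can become a trivial wrapper, which the
compiler is free to inline away (a sketch):

    static inline int set_cpus_allowed(struct task_struct *p,
                                       cpumask_t new_mask)
    {
        return set_cpus_allowed_ptr(p, &amp;new_mask);
    }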

Additionally, to enforce the read-only nature of the new_mask arg, the
"const" property is migrated to the sub-functions called by
set_cpus_allowed().  This silences compiler warnings.

Signed-off-by: Mike Travis &lt;travis@sgi.com&gt;
Signed-off-by: Ingo Molnar &lt;mingo@elte.hu&gt;
</content>
</entry>
</feed>
