<feed xmlns='http://www.w3.org/2005/Atom'>
<title>linux/kernel/stop_machine.c, branch v2.6.28</title>
<subtitle>Mirror of https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/
</subtitle>
<id>https://git.shady.money/linux/atom?h=v2.6.28</id>
<link rel='self' href='https://git.shady.money/linux/atom?h=v2.6.28'/>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/'/>
<updated>2008-11-16T23:09:52Z</updated>
<entry>
<title>stop_machine: fix race with return value (fixes Bug #11989)</title>
<updated>2008-11-16T23:09:52Z</updated>
<author>
<name>Rusty Russell</name>
<email>rusty@rustcorp.com.au</email>
</author>
<published>2008-11-16T21:52:18Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=e14c8bf86350f6c39186a139c5c584a6111b2f01'/>
<id>urn:sha1:e14c8bf86350f6c39186a139c5c584a6111b2f01</id>
<content type='text'>
Bug #11989: Suspend failure on NForce4-based boards due to changes in
stop_machine

We should not access active.fnret outside the lock; in theory the next
stop_machine could overwrite it.

Signed-off-by: Rusty Russell &lt;rusty@rustcorp.com.au&gt;
Tested-by: "Rafael J. Wysocki" &lt;rjw@sisk.pl&gt;
Signed-off-by: Linus Torvalds &lt;torvalds@linux-foundation.org&gt;
</content>
</entry>
<entry>
<title>Revert "Call init_workqueues before pre smp initcalls."</title>
<updated>2008-10-26T02:53:38Z</updated>
<author>
<name>Linus Torvalds</name>
<email>torvalds@linux-foundation.org</email>
</author>
<published>2008-10-26T02:53:38Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=4403b406d4369a275d483ece6ddee0088cc0d592'/>
<id>urn:sha1:4403b406d4369a275d483ece6ddee0088cc0d592</id>
<content type='text'>
This reverts commit a802dd0eb5fc97a50cf1abb1f788a8f6cc5db635 by moving
the call to init_workqueues() back where it belongs - after SMP has been
initialized.

It also moves stop_machine_init() - which needs workqueues - to a later
phase using a core_initcall() instead of early_initcall().  That should
satisfy all ordering requirements, and was apparently the reason why
init_workqueues() was moved to be too early.

Cc: Heiko Carstens &lt;heiko.carstens@de.ibm.com&gt;
Cc: Rusty Russell &lt;rusty@rustcorp.com.au&gt;
Signed-off-by: Linus Torvalds &lt;torvalds@linux-foundation.org&gt;
</content>
</entry>
<entry>
<title>stop_machine: fix error code handling on multiple cpus</title>
<updated>2008-10-21T23:00:26Z</updated>
<author>
<name>Heiko Carstens</name>
<email>heiko.carstens@de.ibm.com</email>
</author>
<published>2008-10-22T15:00:26Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=8163bcac779f62c6bf847caed9bce905db0693fb'/>
<id>urn:sha1:8163bcac779f62c6bf847caed9bce905db0693fb</id>
<content type='text'>
Using |= to update a value which might be updated concurrently on
several cpus will not always work, since we need to make sure that the
update happens atomically.
To fix this, just use a plain write if the called function returns an
error code on a cpu. We end up writing the error code of an arbitrary
cpu if multiple ones fail, but that should be sufficient.

Signed-off-by: Heiko Carstens &lt;heiko.carstens@de.ibm.com&gt;
Signed-off-by: Rusty Russell &lt;rusty@rustcorp.com.au&gt;
</content>
</entry>
<entry>
<title>stop_machine: use workqueues instead of kernel threads</title>
<updated>2008-10-21T23:00:26Z</updated>
<author>
<name>Heiko Carstens</name>
<email>heiko.carstens@de.ibm.com</email>
</author>
<published>2008-10-13T21:50:10Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=c9583e55fa2b08a230c549bd1e3c0bde6c50d9cc'/>
<id>urn:sha1:c9583e55fa2b08a230c549bd1e3c0bde6c50d9cc</id>
<content type='text'>
Convert stop_machine to a workqueue based approach. Instead of using
kernel threads for stop_machine we now use an RT workqueue to
synchronize all cpus.
This has the advantage that all needed per cpu threads are already
created when stop_machine gets called, and therefore a call to
stop_machine won't fail anymore. This is needed for s390, which needs a
mechanism to synchronize all cpus without allocating any memory.
As Rusty pointed out, free_module() needs a non-failing stop_machine
interface as well.

As a side effect the stop_machine code gets simplified.

Signed-off-by: Heiko Carstens &lt;heiko.carstens@de.ibm.com&gt;
Signed-off-by: Rusty Russell &lt;rusty@rustcorp.com.au&gt;
</content>
</entry>
<entry>
<title>stop_machine: remove unused variable</title>
<updated>2008-08-12T07:52:55Z</updated>
<author>
<name>Li Zefan</name>
<email>lizf@cn.fujitsu.com</email>
</author>
<published>2008-07-31T02:31:02Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=ed6d68763b8b589c0ae9d231cbd72bd01f6685c5'/>
<id>urn:sha1:ed6d68763b8b589c0ae9d231cbd72bd01f6685c5</id>
<content type='text'>
Signed-off-by: Li Zefan &lt;lizf@cn.fujitsu.com&gt;
Signed-off-by: Rusty Russell &lt;rusty@rustcorp.com.au&gt;
</content>
</entry>
<entry>
<title>stop_machine(): stop_machine_run() changed to use cpu mask</title>
<updated>2008-07-28T02:16:30Z</updated>
<author>
<name>Rusty Russell</name>
<email>rusty@rustcorp.com.au</email>
</author>
<published>2008-07-28T17:16:30Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=eeec4fad963490821348a331cca6102ae1c4a7a3'/>
<id>urn:sha1:eeec4fad963490821348a331cca6102ae1c4a7a3</id>
<content type='text'>
Instead of a "cpu" arg with magic values NR_CPUS (any cpu) and ~0 (all
cpus), pass a cpumask_t.  Allow NULL for the common case (where we
don't care which CPU the function is run on): temporary cpumask_t's
are usually considered bad for stack space.

This deprecates stop_machine_run, to be removed soon when all the
callers are dead.

Signed-off-by: Rusty Russell &lt;rusty@rustcorp.com.au&gt;
</content>
</entry>
<entry>
<title>Simplify stop_machine</title>
<updated>2008-07-28T02:16:29Z</updated>
<author>
<name>Rusty Russell</name>
<email>rusty@rustcorp.com.au</email>
</author>
<published>2008-07-28T17:16:28Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=ffdb5976c47609c862917d4c186ecbb5706d2dda'/>
<id>urn:sha1:ffdb5976c47609c862917d4c186ecbb5706d2dda</id>
<content type='text'>
stop_machine creates a kthread which creates kernel threads.  We can
create those threads directly and simplify things a little.  Some care
must be taken with CPU hotunplug, which has special needs, but that code
seems more robust than it was in the past.

Signed-off-by: Rusty Russell &lt;rusty@rustcorp.com.au&gt;
Acked-by: Christian Borntraeger &lt;borntraeger@de.ibm.com&gt;
</content>
</entry>
<entry>
<title>stop_machine: add ALL_CPUS option</title>
<updated>2008-07-28T02:16:28Z</updated>
<author>
<name>Jason Baron</name>
<email>jbaron@redhat.com</email>
</author>
<published>2008-02-28T16:33:03Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=5c2aed622571ac7c3c6ec182d6d3c318e4b45c8b'/>
<id>urn:sha1:5c2aed622571ac7c3c6ec182d6d3c318e4b45c8b</id>
<content type='text'>
-allow stop_machine_run() to call a function on all cpus. Calling
 stop_machine_run() with 'ALL_CPUS' invokes this new behavior.
 stop_machine_run() proceeds as normal until the calling cpu has
 invoked 'fn'. Then, we tell all the other cpus to call 'fn'.

Signed-off-by: Jason Baron &lt;jbaron@redhat.com&gt;
Signed-off-by: Mathieu Desnoyers &lt;mathieu.desnoyers@polymtl.ca&gt;
Signed-off-by: Rusty Russell &lt;rusty@rustcorp.com.au&gt;
CC: Adrian Bunk &lt;bunk@stusta.de&gt;
CC: Andi Kleen &lt;andi@firstfloor.org&gt;
CC: Alexey Dobriyan &lt;adobriyan@gmail.com&gt;
CC: Christoph Hellwig &lt;hch@infradead.org&gt;
CC: mingo@elte.hu
CC: akpm@osdl.org
</content>
</entry>
<entry>
<title>cpumask: Replace cpumask_of_cpu with cpumask_of_cpu_ptr</title>
<updated>2008-07-18T20:02:57Z</updated>
<author>
<name>Mike Travis</name>
<email>travis@sgi.com</email>
</author>
<published>2008-07-15T21:14:30Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=65c011845316d3c1381f478ca0d8265c43b3b039'/>
<id>urn:sha1:65c011845316d3c1381f478ca0d8265c43b3b039</id>
<content type='text'>
  * This patch replaces the dangerous lvalue version of cpumask_of_cpu
    with new cpumask_of_cpu_ptr macros.  These are patterned after the
    node_to_cpumask_ptr macros.

    In general terms, if there is a cpumask_of_cpu_map[] then a pointer to
    the cpumask_of_cpu_map[cpu] entry is used.  The cpumask_of_cpu_map
    is provided when there is a large NR_CPUS count, reducing
    greatly the amount of code generated and stack space used for
    cpumask_of_cpu().  The pointer to the cpumask_t value is needed for
    calling set_cpus_allowed_ptr() to reduce the amount of stack space
    needed to pass the cpumask_t value.

    If there isn't a cpumask_of_cpu_map[], then a temporary variable is
    declared and filled in with value from cpumask_of_cpu(cpu) as well as
    a pointer variable pointing to this temporary variable.  Afterwards,
    the pointer is used to reference the cpumask value.  The compiler
    will optimize out the extra dereference through the pointer as well
    as the stack space used for the pointer, resulting in identical code.

    A good example of the orthogonal usages is in net/sunrpc/svc.c:

	case SVC_POOL_PERCPU:
	{
		unsigned int cpu = m-&gt;pool_to[pidx];
		cpumask_of_cpu_ptr(cpumask, cpu);

		*oldmask = current-&gt;cpus_allowed;
		set_cpus_allowed_ptr(current, cpumask);
		return 1;
	}
	case SVC_POOL_PERNODE:
	{
		unsigned int node = m-&gt;pool_to[pidx];
		node_to_cpumask_ptr(nodecpumask, node);

		*oldmask = current-&gt;cpus_allowed;
		set_cpus_allowed_ptr(current, nodecpumask);
		return 1;
	}

Signed-off-by: Mike Travis &lt;travis@sgi.com&gt;
Signed-off-by: Ingo Molnar &lt;mingo@elte.hu&gt;
</content>
</entry>
<entry>
<title>sched: add new API sched_setscheduler_nocheck: add a flag to control access checks</title>
<updated>2008-06-23T20:57:56Z</updated>
<author>
<name>Rusty Russell</name>
<email>rusty@rustcorp.com.au</email>
</author>
<published>2008-06-23T03:55:38Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=961ccddd59d627b89bd3dc284b6517833bbdf25d'/>
<id>urn:sha1:961ccddd59d627b89bd3dc284b6517833bbdf25d</id>
<content type='text'>
Hidehiro Kawai noticed that sched_setscheduler() can fail in
stop_machine: it calls sched_setscheduler() from insmod, which can
have CAP_SYS_MODULE without CAP_SYS_NICE.

Two cases could have failed, so are changed to sched_setscheduler_nocheck:
  kernel/softirq.c:cpu_callback()
	- CPU hotplug callback
  kernel/stop_machine.c:__stop_machine_run()
	- Called from various places, including modprobe()

Signed-off-by: Rusty Russell &lt;rusty@rustcorp.com.au&gt;
Cc: Jeremy Fitzhardinge &lt;jeremy@goop.org&gt;
Cc: Hidehiro Kawai &lt;hidehiro.kawai.ez@hitachi.com&gt;
Cc: Andrew Morton &lt;akpm@linux-foundation.org&gt;
Cc: linux-mm@kvack.org
Cc: sugita &lt;yumiko.sugita.yf@hitachi.com&gt;
Cc: Satoshi OSHIMA &lt;satoshi.oshima.fk@hitachi.com&gt;
Signed-off-by: Ingo Molnar &lt;mingo@elte.hu&gt;
</content>
</entry>
</feed>
