<feed xmlns='http://www.w3.org/2005/Atom'>
<title>linux/block, branch v2.6.39</title>
<subtitle>Mirror of https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/
</subtitle>
<id>https://git.shady.money/linux/atom?h=v2.6.39</id>
<link rel='self' href='https://git.shady.money/linux/atom?h=v2.6.39'/>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/'/>
<updated>2011-05-18T10:24:03Z</updated>
<entry>
<title>block: don't delay blk_run_queue_async</title>
<updated>2011-05-18T10:24:03Z</updated>
<author>
<name>Shaohua Li</name>
<email>shaohua.li@intel.com</email>
</author>
<published>2011-05-18T09:22:43Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=3ec717b7ca4ee1d75d77e4f6286430d8f01d1dbd'/>
<id>urn:sha1:3ec717b7ca4ee1d75d77e4f6286430d8f01d1dbd</id>
<content type='text'>
Let's check a scenario:
1. blk_delay_queue(q, SCSI_QUEUE_DELAY);
2. blk_run_queue_async();
the second call becomes a no-op, because q-&gt;delay_work already has
WORK_STRUCT_PENDING_BIT set, so the delayed work will still only run
after SCSI_QUEUE_DELAY. But blk_run_queue_async actually expects the
delayed work to run immediately.

Fix this by doing a cancel on potentially pending delayed work
before queuing an immediate run of the workqueue.
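The pending-bit semantics can be sketched with a small userspace model
(Python, purely illustrative; ToyDelayedWork and its methods are made-up
stand-ins for queue_delayed_work()/__cancel_delayed_work(), not kernel APIs):

```python
SCSI_QUEUE_DELAY = 3  # illustrative value, not the kernel constant


class ToyDelayedWork:
    """Models q.delay_work: once pending, further queueing is a no-op."""

    def __init__(self):
        self.pending = False
        self.delay = None  # delay the work will actually run with

    def queue(self, delay):
        """Mimics queue_delayed_work(): no-op if the pending bit is set."""
        if self.pending:
            return False
        self.pending = True
        self.delay = delay
        return True

    def cancel(self):
        """Mimics __cancel_delayed_work(): clears the pending bit."""
        self.pending = False


work = ToyDelayedWork()

# 1. blk_delay_queue(q, SCSI_QUEUE_DELAY)
work.queue(SCSI_QUEUE_DELAY)

# 2. blk_run_queue_async() without the fix: the second queueing is a
# no-op, so the work still runs only after SCSI_QUEUE_DELAY.
work.queue(0)
assert work.delay == SCSI_QUEUE_DELAY

# With the fix: cancel any pending delayed work first, then queue with
# zero delay so the run happens immediately.
work.cancel()
work.queue(0)
assert work.delay == 0
```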

Signed-off-by: Shaohua Li &lt;shaohua.li@intel.com&gt;
Signed-off-by: Jens Axboe &lt;jaxboe@fusionio.com&gt;
</content>
</entry>
<entry>
<title>blk-throttle: Use task_subsys_state() to determine a task's blkio_cgroup</title>
<updated>2011-05-16T13:24:08Z</updated>
<author>
<name>Vivek Goyal</name>
<email>vgoyal@redhat.com</email>
</author>
<published>2011-05-16T13:24:08Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=70087dc38cc77ca8f46059564c00338777734762'/>
<id>urn:sha1:70087dc38cc77ca8f46059564c00338777734762</id>
<content type='text'>
Currently we first map the task to its cgroup and then the cgroup to
the blkio_cgroup. There is a more direct way to get to the blkio_cgroup
from a task, using task_subsys_state(). Use that.

The real reason for the fix is that it also avoids a race in generic
cgroup code. During remount/umount, rebind_subsystems() is called and
it can do the following without waiting for an rcu grace period:

cgrp-&gt;subsys[i] = NULL;

That means that if somebody got hold of the cgroup under rcu and then
tried to follow cgroup-&gt;subsys[] to get to the blkio_cgroup, they
would get NULL, which is wrong. I was running into this race condition
with ltp running on an upstream-derived kernel, and it led to a crash.

So ideally we should also fix the generic cgroup code to wait for an
rcu grace period before setting the pointer to NULL. Li Zefan is not
very keen on introducing synchronize_rcu() there, as he thinks it will
slow down mount/remount/umount operations.

So for the time being, at least fix the kernel crash by taking a more
direct route to the blkio_cgroup.

One tester had reported a crash while running LTP on a derived kernel;
with this fix the crash is no longer seen, while the test has been
running for over 6 days.

Signed-off-by: Vivek Goyal &lt;vgoyal@redhat.com&gt;
Reviewed-by: Li Zefan &lt;lizf@cn.fujitsu.com&gt;
Signed-off-by: Jens Axboe &lt;jaxboe@fusionio.com&gt;
</content>
</entry>
<entry>
<title>block: don't propagate unlisted DISK_EVENTs to userland</title>
<updated>2011-04-21T17:43:58Z</updated>
<author>
<name>Tejun Heo</name>
<email>tj@kernel.org</email>
</author>
<published>2011-04-21T17:43:58Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=7c88a168da8003fd4d8fb6ae103c4ecf29cb1130'/>
<id>urn:sha1:7c88a168da8003fd4d8fb6ae103c4ecf29cb1130</id>
<content type='text'>
DISK_EVENT_MEDIA_CHANGE is used both as a userland-visible event and
as an internal event for revalidation of removable devices.  Some
legacy drivers don't implement proper event detection and continuously
generate events under certain circumstances.  For example, ide-cd
generates media-change events continuously if there's no media in the
drive, which can lead to an infinite loop of events bouncing back and
forth between the driver and the userland event handler.

This patch updates disk event infrastructure such that it never
propagates events not listed in disk-&gt;events to userland.  Those
events are processed the same for internal purposes but uevent
generation is suppressed.

This also ensures that userland only sees events which are advertised
in the @events sysfs node, lowering the risk of confusion.
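The suppression rule can be sketched as a toy model (Python, illustrative;
the string event names and set logic are stand-ins for the real disk
events bitmask, not kernel code):

```python
# Every detected event is still processed internally, but only events
# the driver advertises generate uevents for userland.

advertised = {"eject_request"}                   # what the driver lists
detected = {"media_change", "eject_request"}     # what polling found

processed_internally = detected                  # internal handling sees all
sent_to_userland = detected.intersection(advertised)  # uevents are masked

assert "media_change" in processed_internally
assert "media_change" not in sent_to_userland
```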

Signed-off-by: Tejun Heo &lt;tj@kernel.org&gt;
Signed-off-by: Jens Axboe &lt;jaxboe@fusionio.com&gt;
</content>
</entry>
<entry>
<title>elevator: check for ELEVATOR_INSERT_SORT_MERGE in !elvpriv case too</title>
<updated>2011-04-21T17:28:35Z</updated>
<author>
<name>Jens Axboe</name>
<email>jaxboe@fusionio.com</email>
</author>
<published>2011-04-21T17:28:35Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=3aa72873ffdcc2f7919743efbbefc351ec73f5cb'/>
<id>urn:sha1:3aa72873ffdcc2f7919743efbbefc351ec73f5cb</id>
<content type='text'>
The sort insert is the one that goes to the IO scheduler. With
the SORT_MERGE addition, we could bypass IO scheduler setup
but still ask the IO scheduler to insert the request. This would
cause an oops when switching IO schedulers through the sysfs
interface, unless the disk just happened to be idle while it
occurred.
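A rough sketch of the fallback (Python, illustrative; the
insert_destination() helper and string constants are made up to model the
!elvpriv check, not the actual kernel code):

```python
# Requests that bypassed IO scheduler setup (no elevator private data)
# must not be handed to the elevator; SORT_MERGE needs the same fallback
# treatment as SORT.

ELEVATOR_INSERT_SORT = "sort"
ELEVATOR_INSERT_SORT_MERGE = "sort_merge"
ELEVATOR_INSERT_BACK = "back"

def insert_destination(where, has_elvpriv):
    """Return where the request actually goes once the fallback applies."""
    if not has_elvpriv and where in (ELEVATOR_INSERT_SORT,
                                     ELEVATOR_INSERT_SORT_MERGE):
        # No elevator private data: don't call into the IO scheduler.
        return ELEVATOR_INSERT_BACK
    return where

# Before the fix only SORT was caught, so a SORT_MERGE insert could
# reach the elevator and oops; with the check extended, both fall back.
assert insert_destination(ELEVATOR_INSERT_SORT_MERGE, False) == ELEVATOR_INSERT_BACK
```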

Signed-off-by: Jens Axboe &lt;jaxboe@fusionio.com&gt;
</content>
</entry>
<entry>
<title>block: Remove the extra check in queue_requests_store</title>
<updated>2011-04-19T11:51:53Z</updated>
<author>
<name>Tao Ma</name>
<email>boyu.mt@taobao.com</email>
</author>
<published>2011-04-19T11:50:40Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=60735b6362f29b52b5635a2dfa9ab5ad39948345'/>
<id>urn:sha1:60735b6362f29b52b5635a2dfa9ab5ad39948345</id>
<content type='text'>
In queue_requests_store, the code looks like
	if (rl-&gt;count[BLK_RW_SYNC] &gt;= q-&gt;nr_requests) {
		blk_set_queue_full(q, BLK_RW_SYNC);
	} else if (rl-&gt;count[BLK_RW_SYNC]+1 &lt;= q-&gt;nr_requests) {
		blk_clear_queue_full(q, BLK_RW_SYNC);
		wake_up(&amp;rl-&gt;wait[BLK_RW_SYNC]);
	}
If the "if" condition is not satisfied, we already know that
rl-&gt;count[BLK_RW_SYNC] &lt; q-&gt;nr_requests, which is the same as
rl-&gt;count[BLK_RW_SYNC]+1 &lt;= q-&gt;nr_requests. So every "else" path
satisfies the "else if" check, and that check isn't actually
needed.
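The integer argument can be spot-checked mechanically (Python,
illustrative; the variable names are generic, not kernel code):

```python
# If "count >= n" fails for integers, then count + 1 is at most n, so
# the "else if" test can never be false inside the else branch.

for n in range(0, 20):
    for count in range(0, 20):
        if not (count >= n):
            # This is the "else" branch; the "else if" condition
            # (count + 1 at most n) always holds here.
            assert n >= count + 1
```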

Signed-off-by: Tao Ma &lt;boyu.mt@taobao.com&gt;
Signed-off-by: Jens Axboe &lt;jaxboe@fusionio.com&gt;
</content>
</entry>
<entry>
<title>block, blk-sysfs: Fix an err return path in blk_register_queue()</title>
<updated>2011-04-19T11:51:53Z</updated>
<author>
<name>Liu Yuan</name>
<email>tailai.ly@taobao.com</email>
</author>
<published>2011-04-19T11:47:58Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=ed5302d3c25006a9edc7a7fbea97a30483f89ef7'/>
<id>urn:sha1:ed5302d3c25006a9edc7a7fbea97a30483f89ef7</id>
<content type='text'>
We do not call blk_trace_remove_sysfs() in the error return path
when kobject_add() fails. This patch fixes it.

Cc: stable@kernel.org
Signed-off-by: Liu Yuan &lt;tailai.ly@taobao.com&gt;
Signed-off-by: Jens Axboe &lt;jaxboe@fusionio.com&gt;
</content>
</entry>
<entry>
<title>block: remove stale kerneldoc member from __blk_run_queue()</title>
<updated>2011-04-19T11:34:14Z</updated>
<author>
<name>Jens Axboe</name>
<email>jaxboe@fusionio.com</email>
</author>
<published>2011-04-19T11:34:14Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=d350e6b6e819df0a383ff34465720bfaa0f91c79'/>
<id>urn:sha1:d350e6b6e819df0a383ff34465720bfaa0f91c79</id>
<content type='text'>
We don't pass in a 'force_kblockd' anymore, so get rid of the
stale comment.

Reported-by: Mike Snitzer &lt;snitzer@redhat.com&gt;
Signed-off-by: Jens Axboe &lt;jaxboe@fusionio.com&gt;
</content>
</entry>
<entry>
<title>block: get rid of QUEUE_FLAG_REENTER</title>
<updated>2011-04-19T11:32:46Z</updated>
<author>
<name>Jens Axboe</name>
<email>jaxboe@fusionio.com</email>
</author>
<published>2011-04-19T11:32:46Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=c21e6beba8835d09bb80e34961430b13e60381c5'/>
<id>urn:sha1:c21e6beba8835d09bb80e34961430b13e60381c5</id>
<content type='text'>
We are currently using this flag to check whether it's safe
to call into -&gt;request_fn(). If it is set, we punt to kblockd.
But we get a lot of false positives and excessive punts to
kblockd, which hurts performance.

The only real abuser of this infrastructure is SCSI. So export
the async queue run and convert SCSI over to use that. There's
room for improvement in that SCSI need not always use the async
call, but this fixes our performance issue and they can fix that
up in due time.

Signed-off-by: Jens Axboe &lt;jaxboe@fusionio.com&gt;
</content>
</entry>
<entry>
<title>cfq-iosched: read_lock() does not always imply rcu_read_lock()</title>
<updated>2011-04-19T07:10:35Z</updated>
<author>
<name>Jens Axboe</name>
<email>jaxboe@fusionio.com</email>
</author>
<published>2011-04-19T07:10:35Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=5f45c69589b7d2953584e6cd0b31e35dbe960ad0'/>
<id>urn:sha1:5f45c69589b7d2953584e6cd0b31e35dbe960ad0</id>
<content type='text'>
For some configurations of CONFIG_PREEMPT that is not true. So
get rid of __call_for_each_cic() and always use the explicitly
rcu_read_lock() protected call_for_each_cic() instead.

This fixes a potential bug related to IO scheduler removal or
online switching.

Thanks to Paul McKenney for clarifying this.

Signed-off-by: Jens Axboe &lt;jaxboe@fusionio.com&gt;
</content>
</entry>
<entry>
<title>block: kill blk_flush_plug_list() export</title>
<updated>2011-04-18T20:06:57Z</updated>
<author>
<name>Jens Axboe</name>
<email>jaxboe@fusionio.com</email>
</author>
<published>2011-04-18T20:06:57Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=bd900d4580107c899d43b262fbbd995f11097a43'/>
<id>urn:sha1:bd900d4580107c899d43b262fbbd995f11097a43</id>
<content type='text'>
With all drivers and file systems converted, we only have
in-core use of this function. So remove the export.

Reported-by: Christoph Hellwig &lt;hch@lst.de&gt;
Signed-off-by: Jens Axboe &lt;jaxboe@fusionio.com&gt;
</content>
</entry>
</feed>
