<feed xmlns='http://www.w3.org/2005/Atom'>
<title>linux/drivers/md/dm-writecache.c, branch v5.7</title>
<subtitle>Mirror of https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/
</subtitle>
<id>https://git.shady.money/linux/atom?h=v5.7</id>
<link rel='self' href='https://git.shady.money/linux/atom?h=v5.7'/>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/'/>
<updated>2020-04-16T20:04:13Z</updated>
<entry>
<title>dm writecache: fix data corruption when reloading the target</title>
<updated>2020-04-16T20:04:13Z</updated>
<author>
<name>Mikulas Patocka</name>
<email>mpatocka@redhat.com</email>
</author>
<published>2020-04-15T15:01:38Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=31b22120194b5c0d460f59e0c98504de1d3f1f14'/>
<id>urn:sha1:31b22120194b5c0d460f59e0c98504de1d3f1f14</id>
<content type='text'>
The dm-writecache reads metadata in the target constructor. However, when
we reload the target, there could be another active instance running on
the same device. This is the sequence of operations when doing a reload:

1. construct new target
2. suspend old target
3. resume new target
4. destroy old target

Metadata written by the old target between steps 1 and 2 would not be
visible to the new target.

Fix the data corruption by loading the metadata in the resume handler.

Also, validate that block_size is at least as large as both devices'
logical block size, and read only one block of metadata in the target
constructor -- there is no need to read the entire metadata now that
this is done during resume.
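
The change can be pictured with the generic device-mapper target hooks;
this is only a hedged sketch, and the wc_read_* helpers are hypothetical
stand-ins, not the actual dm-writecache functions:

#include &lt;linux/device-mapper.h&gt;

/* hypothetical helpers, reduced to stubs */
static int wc_read_superblock(struct dm_target *ti) { return 0; } /* 1 block */
static void wc_read_metadata(struct dm_target *ti) { }            /* everything */

static int wc_ctr(struct dm_target *ti, unsigned int argc, char **argv)
{
        /* another active instance may still be writing metadata on the
         * same device, so only read and validate the superblock here */
        return wc_read_superblock(ti);
}

static void wc_resume(struct dm_target *ti)
{
        /* the old instance is suspended by now, so the metadata read
         * here is guaranteed to be up to date */
        wc_read_metadata(ti);
}

static struct target_type wc_sketch_target = {
        .name   = "writecache-sketch",
        .ctr    = wc_ctr,
        .resume = wc_resume,
};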

Fixes: 48debafe4f2f ("dm: add writecache target")
Cc: stable@vger.kernel.org # v4.18+
Signed-off-by: Mikulas Patocka &lt;mpatocka@redhat.com&gt;
Signed-off-by: Mike Snitzer &lt;snitzer@redhat.com&gt;
</content>
</entry>
<entry>
<title>dm writecache: add cond_resched to avoid CPU hangs</title>
<updated>2020-03-27T18:36:50Z</updated>
<author>
<name>Mikulas Patocka</name>
<email>mpatocka@redhat.com</email>
</author>
<published>2020-03-27T11:22:36Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=1edaa447d958bec24c6a79685a5790d98976fd16'/>
<id>urn:sha1:1edaa447d958bec24c6a79685a5790d98976fd16</id>
<content type='text'>
Initializing a dm-writecache device can take a long time when the
persistent memory device is large.  Add cond_resched() to a few loops
to avoid warnings that the CPU is stuck.
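
A minimal kernel-style sketch of the pattern; the loop body is a
placeholder, not the actual initialization code:

#include &lt;linux/sched.h&gt;

static void init_entries(size_t n_blocks)
{
        size_t i;

        for (i = 0; i &lt; n_blocks; i++) {
                /* ... per-entry initialization work ... */
                cond_resched();  /* yield so that long loops don't trigger
                                    "CPU stuck" / soft-lockup warnings */
        }
}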

Cc: stable@vger.kernel.org # v4.18+
Signed-off-by: Mikulas Patocka &lt;mpatocka@redhat.com&gt;
Signed-off-by: Mike Snitzer &lt;snitzer@redhat.com&gt;
</content>
</entry>
<entry>
<title>dm writecache: optimize superblock write</title>
<updated>2020-03-24T15:55:09Z</updated>
<author>
<name>Mikulas Patocka</name>
<email>mpatocka@redhat.com</email>
</author>
<published>2020-02-24T09:20:34Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=dc8a01ae1dbd7bac98368da4d8f81632512429f5'/>
<id>urn:sha1:dc8a01ae1dbd7bac98368da4d8f81632512429f5</id>
<content type='text'>
If we write a superblock in writecache_flush, we don't need to set a bit
and scan the bitmap for it - we can just write the superblock directly. Also,
we can set the flag REQ_FUA on the write bio, so that we don't need to
submit a flush bio afterwards.
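
A hedged fragment of the flag usage; "bio" is assumed to be an already
prepared superblock write bio, and this is not the commit's actual code:

#include &lt;linux/bio.h&gt;
#include &lt;linux/blkdev.h&gt;

static void submit_superblock_fua(struct bio *bio)
{
        /* REQ_FUA: the written data reaches stable storage with this bio,
         * so no separate flush bio has to follow */
        bio-&gt;bi_opf = REQ_OP_WRITE | REQ_FUA;
        submit_bio(bio);
}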

Signed-off-by: Mikulas Patocka &lt;mpatocka@redhat.com&gt;
Signed-off-by: Mike Snitzer &lt;snitzer@redhat.com&gt;
</content>
</entry>
<entry>
<title>dm writecache: implement gradual cleanup</title>
<updated>2020-03-24T15:55:08Z</updated>
<author>
<name>Mikulas Patocka</name>
<email>mpatocka@redhat.com</email>
</author>
<published>2020-02-24T09:20:33Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=3923d4854e189d84c6ec22e66d536d3498f2747c'/>
<id>urn:sha1:3923d4854e189d84c6ec22e66d536d3498f2747c</id>
<content type='text'>
If a block is stored in the cache for too long, it will now be
written to the underlying device and cleaned up.

Add a new option "max_age" that specifies the maximum age of a block
in milliseconds.
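
A hypothetical example table line passing the new option (device paths
and values are illustrative; the optional-argument count covers both the
keyword and its value):

0 1638400 writecache s /dev/sdb /dev/sdc 4096 2 max_age 600000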

Signed-off-by: Mikulas Patocka &lt;mpatocka@redhat.com&gt;
Signed-off-by: Mike Snitzer &lt;snitzer@redhat.com&gt;
</content>
</entry>
<entry>
<title>dm writecache: implement the "cleaner" policy</title>
<updated>2020-03-24T15:50:30Z</updated>
<author>
<name>Mikulas Patocka</name>
<email>mpatocka@redhat.com</email>
</author>
<published>2020-02-24T09:20:32Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=93de44eb3fc8c3566f5315b0210630cc361526a7'/>
<id>urn:sha1:93de44eb3fc8c3566f5315b0210630cc361526a7</id>
<content type='text'>
The "flush" or "flush_on_suspend" messages flush the whole cache. However,
these flushing methods can take some time and the process is left in
an interruptible state during the flush.

Implement a "cleaner" option that offers an alternate flushing method.
When this option is activated (either by a message or in the constructor
arguments), the cache will not promote new writes (however, writes to
already cached blocks are promoted, to avoid data corruption due to
misordered writes) and it will gradually write back any cached data.
Userspace can then monitor the cleaning process with "dmsetup status".
When the number of cached blocks drops to zero, userspace can unload
the dm-writecache target and replace it with dm-linear or other targets.
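
Conceptually, the write-path decision under the cleaner policy looks
like the following hedged fragment, where "cleaner", block_is_cached(),
write_to_cache() and write_to_origin() are all hypothetical stand-ins:

if (!cleaner || block_is_cached(wc, bio))
        write_to_cache(wc, bio);   /* keeps ordering with the older cached copy */
else
        write_to_origin(wc, bio);  /* new writes are not promoted */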

Signed-off-by: Mikulas Patocka &lt;mpatocka@redhat.com&gt;
Signed-off-by: Mike Snitzer &lt;snitzer@redhat.com&gt;
</content>
</entry>
<entry>
<title>dm writecache: do direct write if the cache is full</title>
<updated>2020-03-24T15:26:18Z</updated>
<author>
<name>Mikulas Patocka</name>
<email>mpatocka@redhat.com</email>
</author>
<published>2020-02-24T09:20:31Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=d53f1fafec9d086f1c5166436abefdaef30e0363'/>
<id>urn:sha1:d53f1fafec9d086f1c5166436abefdaef30e0363</id>
<content type='text'>
If the cache device is full, we do a direct write to the origin device.
Note that we must not do it if the written block is already in the cache.

Signed-off-by: Mikulas Patocka &lt;mpatocka@redhat.com&gt;
Signed-off-by: Mike Snitzer &lt;snitzer@redhat.com&gt;
</content>
</entry>
<entry>
<title>dm: bump version of core and various targets</title>
<updated>2020-03-03T16:10:21Z</updated>
<author>
<name>Mike Snitzer</name>
<email>snitzer@redhat.com</email>
</author>
<published>2020-02-27T19:25:31Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=636be4241bdd88fec273b38723e44bad4e1c4fae'/>
<id>urn:sha1:636be4241bdd88fec273b38723e44bad4e1c4fae</id>
<content type='text'>
Changes made during the 5.6 cycle warrant bumping the version number
for DM core and the targets modified by this commit.

It should be noted that dm-thin, dm-crypt and dm-raid already had
their target version bumped during the 5.6 merge window.

Signed-off-by: Mike Snitzer &lt;snitzer@redhat.com&gt;
</content>
</entry>
<entry>
<title>dm writecache: verify watermark during resume</title>
<updated>2020-02-27T21:44:24Z</updated>
<author>
<name>Mikulas Patocka</name>
<email>mpatocka@redhat.com</email>
</author>
<published>2020-02-24T09:20:30Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=41c526c5af46d4c4dab7f72c99000b7fac0b9702'/>
<id>urn:sha1:41c526c5af46d4c4dab7f72c99000b7fac0b9702</id>
<content type='text'>
Verify the watermark upon resume - so that if the target is reloaded
with a lower watermark, it will start the cleanup process immediately.

Fixes: 48debafe4f2f ("dm: add writecache target")
Cc: stable@vger.kernel.org # 4.18+
Signed-off-by: Mikulas Patocka &lt;mpatocka@redhat.com&gt;
Signed-off-by: Mike Snitzer &lt;snitzer@redhat.com&gt;
</content>
</entry>
<entry>
<title>dm: report suspended device during destroy</title>
<updated>2020-02-27T21:40:58Z</updated>
<author>
<name>Mikulas Patocka</name>
<email>mpatocka@redhat.com</email>
</author>
<published>2020-02-24T09:20:28Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=adc0daad366b62ca1bce3e2958a40b0b71a8b8b3'/>
<id>urn:sha1:adc0daad366b62ca1bce3e2958a40b0b71a8b8b3</id>
<content type='text'>
The function dm_suspended returns true if the target is suspended.
However, when the target is being suspended during unload, it returns
false.

An example where this is a problem: the test "!dm_suspended(wc-&gt;ti)" in
writecache_writeback is not sufficient, because dm_suspended returns
zero while writecache_suspend is in progress.  As is, without an
enhanced dm_suspended, simply switching from flush_workqueue to
drain_workqueue still emits warnings:
workqueue writecache-writeback: drain_workqueue() isn't complete after 10 tries
workqueue writecache-writeback: drain_workqueue() isn't complete after 100 tries
workqueue writecache-writeback: drain_workqueue() isn't complete after 200 tries
workqueue writecache-writeback: drain_workqueue() isn't complete after 300 tries
workqueue writecache-writeback: drain_workqueue() isn't complete after 400 tries

writecache_suspend calls flush_workqueue(wc-&gt;writeback_wq) - this function
flushes the currently queued work. However, the work item may re-queue
itself, and flush_workqueue doesn't wait for re-queued work to finish.
Because of this, the function writecache_writeback continues executing
after the device was suspended and then runs concurrently with
writecache_dtr, causing a crash in writecache_writeback.

We must use drain_workqueue - that waits until the work and all re-queued
works finish.
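
The difference matters for self-requeueing work items. A minimal
standalone sketch (hypothetical names, not the dm-writecache code):

#include &lt;linux/workqueue.h&gt;

static struct workqueue_struct *wq;     /* assume alloc_workqueue() elsewhere */
static bool more_data;                  /* hypothetical "still dirty" condition */

static void writeback_fn(struct work_struct *w);
static DECLARE_WORK(writeback_work, writeback_fn);

static void writeback_fn(struct work_struct *w)
{
        /* ... write back one batch ... */
        if (more_data)
                queue_work(wq, w);      /* the work re-queues itself */
}

static void stop_writeback(void)
{
        /* flush_workqueue(wq) would only wait for work queued so far; a
         * re-queued instance could still run afterwards. drain_workqueue()
         * waits until the queue is really empty. */
        drain_workqueue(wq);
}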

As a prereq for switching to drain_workqueue, this commit fixes
dm_suspended to return true after the presuspend hook and before the
postsuspend hook - just like during a normal suspend. It allows
simplifying the dm-integrity and dm-writecache targets so that they
don't have to maintain suspended flags on their own.

With this change, drain_workqueue() can be used effectively.  This
change was tested with the lvm2 testsuite and the cryptsetup testsuite
and there are no regressions.

Fixes: 48debafe4f2f ("dm: add writecache target")
Cc: stable@vger.kernel.org # 4.18+
Reported-by: Corey Marthaler &lt;cmarthal@redhat.com&gt;
Signed-off-by: Mikulas Patocka &lt;mpatocka@redhat.com&gt;
Signed-off-by: Mike Snitzer &lt;snitzer@redhat.com&gt;
</content>
</entry>
<entry>
<title>dm writecache: improve performance of large linear writes on SSDs</title>
<updated>2020-01-16T18:34:17Z</updated>
<author>
<name>Mikulas Patocka</name>
<email>mpatocka@redhat.com</email>
</author>
<published>2020-01-15T09:35:22Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=dcd195071f22d4770911ca46694ca398b6d5101d'/>
<id>urn:sha1:dcd195071f22d4770911ca46694ca398b6d5101d</id>
<content type='text'>
When dm-writecache is used with an SSD as a cache device, it would submit a
separate bio for each written block. The I/Os would be merged by the disk
scheduler, but this merging degrades performance.

Improve dm-writecache performance by submitting larger bios - this is
possible as long as there is consecutive free space on the cache
device.
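
The idea in a hedged sketch; blocks_are_consecutive() and
submit_one_bio() are hypothetical stand-ins for the real bio
construction:

#include &lt;linux/types.h&gt;

static void submit_writes(sector_t first_block, unsigned int nr_blocks)
{
        while (nr_blocks) {
                unsigned int run = 1;

                /* grow the run while the cache blocks stay consecutive */
                while (run &lt; nr_blocks &amp;&amp; blocks_are_consecutive(first_block, run))
                        run++;

                submit_one_bio(first_block, run); /* one bio for the whole run */
                first_block += run;
                nr_blocks -= run;
        }
}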

Benchmark (arm64 with 64k page size, using /dev/ram0 as a cache device):

fio --bs=512k --iodepth=32 --size=400M --direct=1 \
    --filename=/dev/mapper/cache --rw=randwrite --numjobs=1 --name=test

block	old	new
size	MiB/s	MiB/s
---------------------
512	181	700
1k	347	1256
2k	644	2020
4k	1183	2759
8k	1852	3333
16k	2469	3509
32k	2974	3670
64k	3404	3810

Signed-off-by: Mikulas Patocka &lt;mpatocka@redhat.com&gt;
Signed-off-by: Mike Snitzer &lt;snitzer@redhat.com&gt;
</content>
</entry>
</feed>
