<feed xmlns='http://www.w3.org/2005/Atom'>
<title>linux/fs/buffer.c, branch v6.16</title>
<subtitle>Mirror of https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/</subtitle>
<id>https://git.shady.money/linux/atom?h=v6.16</id>
<link rel='self' href='https://git.shady.money/linux/atom?h=v6.16'/>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/'/>
<updated>2025-05-26T15:23:09Z</updated>
<entry>
<title>Merge tag 'vfs-6.16-rc1.writepage' of git://git.kernel.org/pub/scm/linux/kernel/git/vfs/vfs</title>
<updated>2025-05-26T15:23:09Z</updated>
<author>
<name>Linus Torvalds</name>
<email>torvalds@linux-foundation.org</email>
</author>
<published>2025-05-26T15:23:09Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=dc762851444b32057709cb40e7cdb3054e60b646'/>
<id>urn:sha1:dc762851444b32057709cb40e7cdb3054e60b646</id>
<content type='text'>
Pull final writepage conversion from Christian Brauner:
 "This converts vboxfs from -&gt;writepage() to -&gt;writepages().

  This was the last user of the -&gt;writepage() method. So remove
  -&gt;writepage() completely and all references to it"

* tag 'vfs-6.16-rc1.writepage' of git://git.kernel.org/pub/scm/linux/kernel/git/vfs/vfs:
  fs: Remove aops-&gt;writepage
  mm: Remove swap_writepage() and shmem_writepage()
  ttm: Call shmem_writeout() from ttm_backup_backup_page()
  i915: Use writeback_iter()
  shmem: Add shmem_writeout()
  writeback: Remove writeback_use_writepage()
  migrate: Remove call to -&gt;writepage
  vboxsf: Convert to writepages
  9p: Add a migrate_folio method
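
For instance, a filesystem's address_space_operations now wire up
-&gt;writepages() only (a sketch; vboxsf_writepages stands in for
whatever the vboxsf conversion above actually adds):

  static const struct address_space_operations vboxsf_reg_aops = {
          .dirty_folio    = filemap_dirty_folio,
          /* .writepage no longer exists in address_space_operations */
          .writepages     = vboxsf_writepages,
  };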
</content>
</entry>
<entry>
<title>fs/buffer: optimize discard_buffer()</title>
<updated>2025-05-21T07:34:29Z</updated>
<author>
<name>Davidlohr Bueso</name>
<email>dave@stgolabs.net</email>
</author>
<published>2025-05-15T17:39:25Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=8e184bf1cd7495c63242651de6190bb1678730b0'/>
<id>urn:sha1:8e184bf1cd7495c63242651de6190bb1678730b0</id>
<content type='text'>
While invalidating, the clearing of the bits in discard_buffer()
is done in one fully ordered CAS operation. In the past this was
done via individual clear_bit() calls, until e7470ee89f0 (fs: buffer:
do not use unnecessary atomic operations when discarding buffers).
This implies that there were never strong ordering requirements
outside of being serialized by the buffer lock.

As such, relax the ordering for archs that can benefit. Further,
the ordering implied by unlock_buffer() makes the full barrier of
the current cmpxchg redundant due to its release semantics. And
while in theory the unlock could be part of the bulk clearing, it
is best to leave it explicit, but without the double barriers.
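
A minimal sketch of the resulting shape (assuming the kernel's
try_cmpxchg_relaxed() helper and the existing BUFFER_FLAGS_DISCARD
mask in fs/buffer.c):

  static void discard_buffer(struct buffer_head *bh)
  {
          unsigned long b_state;

          lock_buffer(bh);
          clear_buffer_dirty(bh);
          bh-&gt;b_bdev = NULL;

          /* No ordering needed: the buffer lock serializes everything. */
          b_state = READ_ONCE(bh-&gt;b_state);
          do {
          } while (!try_cmpxchg_relaxed(&amp;bh-&gt;b_state, &amp;b_state,
                                        b_state &amp; ~BUFFER_FLAGS_DISCARD));

          /* Keep the unlock explicit; its release semantics suffice. */
          unlock_buffer(bh);
  }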

Signed-off-by: Davidlohr Bueso &lt;dave@stgolabs.net&gt;
Link: https://lore.kernel.org/20250515173925.147823-5-dave@stgolabs.net
Reviewed-by: Jan Kara &lt;jack@suse.cz&gt;
Signed-off-by: Christian Brauner &lt;brauner@kernel.org&gt;
</content>
</entry>
<entry>
<title>fs/buffer: remove superfluous statements</title>
<updated>2025-05-21T07:34:29Z</updated>
<author>
<name>Davidlohr Bueso</name>
<email>dave@stgolabs.net</email>
</author>
<published>2025-05-15T17:39:24Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=d11a24999605a054bef5e2ade7fedfaefce52388'/>
<id>urn:sha1:d11a24999605a054bef5e2ade7fedfaefce52388</id>
<content type='text'>
Get rid of those unnecessary return statements.
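
That is, trailing returns of this shape in void functions (an
illustrative example, not a hunk from the patch):

  static void example(struct buffer_head *bh)  /* hypothetical */
  {
          put_bh(bh);
          return;  /* superfluous: control falls off the end anyway */
  }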

Signed-off-by: Davidlohr Bueso &lt;dave@stgolabs.net&gt;
Link: https://lore.kernel.org/20250515173925.147823-4-dave@stgolabs.net
Reviewed-by: Jan Kara &lt;jack@suse.cz&gt;
Signed-off-by: Christian Brauner &lt;brauner@kernel.org&gt;
</content>
</entry>
<entry>
<title>fs/buffer: avoid redundant lookup in getblk slowpath</title>
<updated>2025-05-21T07:34:29Z</updated>
<author>
<name>Davidlohr Bueso</name>
<email>dave@stgolabs.net</email>
</author>
<published>2025-05-15T17:39:23Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=98a6ca16333e10ce450b0ab516f4c3e5fe52ef31'/>
<id>urn:sha1:98a6ca16333e10ce450b0ab516f4c3e5fe52ef31</id>
<content type='text'>
Entering __getblk_slow() already implies that a first lookup, the
fastpath, has failed, so try to create the buffers immediately and
avoid the redundant lookup. This saves 5-10% of the total
cost/latency of the slowpath.
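
Roughly, the slowpath loop becomes create-then-lookup instead of
lookup-then-create (a sketch; the dispatch on the gfp flags is per
the previous patch in this series):

  static struct buffer_head *
  __getblk_slow(struct block_device *bdev, sector_t block,
                unsigned size, gfp_t gfp)
  {
          bool blocking = gfpflags_allow_blocking(gfp);

          for (;;) {
                  struct buffer_head *bh;

                  /* Create first: the fastpath lookup already failed. */
                  if (grow_buffers(bdev, block, size, gfp) &lt; 0)
                          return NULL;

                  if (blocking)
                          bh = __find_get_block_nonatomic(bdev, block, size);
                  else
                          bh = __find_get_block(bdev, block, size);
                  if (bh)
                          return bh;
          }
  }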

Signed-off-by: Davidlohr Bueso &lt;dave@stgolabs.net&gt;
Link: https://lore.kernel.org/20250515173925.147823-3-dave@stgolabs.net
Reviewed-by: Jan Kara &lt;jack@suse.cz&gt;
Signed-off-by: Christian Brauner &lt;brauner@kernel.org&gt;
</content>
</entry>
<entry>
<title>fs/buffer: use sleeping lookup in __getblk_slowpath()</title>
<updated>2025-05-21T07:34:28Z</updated>
<author>
<name>Davidlohr Bueso</name>
<email>dave@stgolabs.net</email>
</author>
<published>2025-05-15T17:39:22Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=fb27226c389f499d04913023fbcfb7920fb0e475'/>
<id>urn:sha1:fb27226c389f499d04913023fbcfb7920fb0e475</id>
<content type='text'>
Just as with the fast path, call the lookup variant depending
on the gfp flags.
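
That is, the same dispatch already used on the fast path (sketch):

  if (gfpflags_allow_blocking(gfp))
          bh = __find_get_block_nonatomic(bdev, block, size);
  else
          bh = __find_get_block(bdev, block, size);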

Signed-off-by: Davidlohr Bueso &lt;dave@stgolabs.net&gt;
Link: https://lore.kernel.org/20250515173925.147823-2-dave@stgolabs.net
Reviewed-by: Jan Kara &lt;jack@suse.cz&gt;
Signed-off-by: Christian Brauner &lt;brauner@kernel.org&gt;
</content>
</entry>
<entry>
<title>fs: Remove redundant errseq_set call in mark_buffer_write_io_error.</title>
<updated>2025-05-09T10:31:57Z</updated>
<author>
<name>Jeremy Bongio</name>
<email>jbongio@google.com</email>
</author>
<published>2025-05-07T12:30:10Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=04679f3c27e132c1a2d3881de2f0c5d7128de7c1'/>
<id>urn:sha1:04679f3c27e132c1a2d3881de2f0c5d7128de7c1</id>
<content type='text'>
mark_buffer_write_io_error() sets sb-&gt;s_wb_err to -EIO twice:
once in mapping_set_error() and once in errseq_set(). Only
mapping_set_error() checks whether bh-&gt;b_assoc_map-&gt;host is NULL
before dereferencing it.
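
A sketch of the redundancy (the exact guards in
mark_buffer_write_io_error() may differ from this outline):

  void mark_buffer_write_io_error(struct buffer_head *bh)
  {
          set_buffer_write_io_error(bh);
          /* mapping_set_error() already records the error in the
           * superblock, with a NULL check on mapping-&gt;host. */
          if (bh-&gt;b_assoc_map)
                  mapping_set_error(bh-&gt;b_assoc_map, -EIO);
          /* Redundant, and bh-&gt;b_assoc_map-&gt;host may be NULL here. */
          if (bh-&gt;b_assoc_map)
                  errseq_set(&amp;bh-&gt;b_assoc_map-&gt;host-&gt;i_sb-&gt;s_wb_err, -EIO);
  }

The fix drops the trailing errseq_set() call.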

Discovered via a null pointer dereference during writeback to a
failing device:

[&lt;ffffffff9a416dc8&gt;] ? mark_buffer_write_io_error+0x98/0xc0
[&lt;ffffffff9a416dbe&gt;] ? mark_buffer_write_io_error+0x8e/0xc0
[&lt;ffffffff9ad4bda0&gt;] end_buffer_async_write+0x90/0xd0
[&lt;ffffffff9ad4e3eb&gt;] end_bio_bh_io_sync+0x2b/0x40
[&lt;ffffffff9adbafe6&gt;] blk_update_request+0x1b6/0x480
[&lt;ffffffff9adbb3d8&gt;] blk_mq_end_request+0x18/0x30
[&lt;ffffffff9adbc6aa&gt;] blk_mq_dispatch_rq_list+0x4da/0x8e0
[&lt;ffffffff9adc0a68&gt;] __blk_mq_sched_dispatch_requests+0x218/0x6a0
[&lt;ffffffff9adc07fa&gt;] blk_mq_sched_dispatch_requests+0x3a/0x80
[&lt;ffffffff9adbbb98&gt;] blk_mq_run_hw_queue+0x108/0x330
[&lt;ffffffff9adbcf58&gt;] blk_mq_flush_plug_list+0x178/0x5f0
[&lt;ffffffff9adb6741&gt;] __blk_flush_plug+0x41/0x120
[&lt;ffffffff9adb6852&gt;] blk_finish_plug+0x22/0x40
[&lt;ffffffff9ad47cb0&gt;] wb_writeback+0x150/0x280
[&lt;ffffffff9ac5343f&gt;] ? set_worker_desc+0x9f/0xc0
[&lt;ffffffff9ad4676e&gt;] wb_workfn+0x24e/0x4a0

Fixes: 485e9605c0573 ("fs/buffer.c: record blockdev write errors in super_block that it backs")
Signed-off-by: Jeremy Bongio &lt;jbongio@google.com&gt;
Link: https://lore.kernel.org/20250507123010.1228243-1-jbongio@google.com
Reviewed-by: Christoph Hellwig &lt;hch@lst.de&gt;
Signed-off-by: Christian Brauner &lt;brauner@kernel.org&gt;
</content>
</entry>
<entry>
<title>mm/migrate: fix sleep in atomic for large folios and buffer heads</title>
<updated>2025-04-22T16:16:08Z</updated>
<author>
<name>Davidlohr Bueso</name>
<email>dave@stgolabs.net</email>
</author>
<published>2025-04-18T01:59:21Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=2d900efff915fe24c3948d28eef9078953d87fec'/>
<id>urn:sha1:2d900efff915fe24c3948d28eef9078953d87fec</id>
<content type='text'>
The large folio + buffer head noref migration scenarios are
being naughty and blocking while holding a spinlock.

As a consequence of the pagecache lookup path taking the folio
lock, this serializes against migration paths, so they can wait
for each other. For the private_lock atomic case, a new
BH_Migrate flag is introduced which enables the lookup to bail.

This allows the critical region of the private_lock on the
migration path to be reduced to the way it was before
ebdf4de5642fb6 ("mm: migrate: fix reference check race between
__find_get_block() and migration"), that is, covering the count
checks.

The scope is always noref migration.
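
Conceptually (a sketch of the two sides, not the exact hunks):

  /* Migration side, while holding mapping-&gt;i_private_lock: */
  set_bit(BH_Migrate, &amp;head-&gt;b_state);
  spin_unlock(&amp;mapping-&gt;i_private_lock);  /* critical region stays small */

  /* Atomic lookup side, e.g. __find_get_block(): */
  if (test_bit_acquire(BH_Migrate, &amp;head-&gt;b_state))
          goto out;  /* bail rather than block on migration */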

Reported-by: kernel test robot &lt;oliver.sang@intel.com&gt;
Reported-by: syzbot+f3c6fda1297c748a7076@syzkaller.appspotmail.com
Closes: https://lore.kernel.org/oe-lkp/202503101536.27099c77-lkp@intel.com
Fixes: 3c20917120ce61 ("block/bdev: enable large folio support for large logical block sizes")
Reviewed-by: Jan Kara &lt;jack@suse.cz&gt;
Co-developed-by: Luis Chamberlain &lt;mcgrof@kernel.org&gt;
Signed-off-by: Davidlohr Bueso &lt;dave@stgolabs.net&gt;
Link: https://kdevops.org/ext4/v6.15-rc2.html # [0]
Link: https://lore.kernel.org/all/aAAEvcrmREWa1SKF@bombadil.infradead.org/ # [1]
Link: https://lore.kernel.org/20250418015921.132400-8-dave@stgolabs.net
Tested-by: kdevops@lists.linux.dev # [0] [1]
Reviewed-by: Luis Chamberlain &lt;mcgrof@kernel.org&gt;
Signed-off-by: Christian Brauner &lt;brauner@kernel.org&gt;
</content>
</entry>
<entry>
<title>fs/buffer: use sleeping version of __find_get_block()</title>
<updated>2025-04-22T16:16:08Z</updated>
<author>
<name>Davidlohr Bueso</name>
<email>dave@stgolabs.net</email>
</author>
<published>2025-04-18T01:59:17Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=5b67d43976828dea2394eae2556b369bb7a61f64'/>
<id>urn:sha1:5b67d43976828dea2394eae2556b369bb7a61f64</id>
<content type='text'>
Convert to the new nonatomic flavor to gain its potential
performance benefits and to adapt to future changes vs migration
such that semantics are kept.

Convert write_boundary_block(), which already takes the buffer
lock, as well as bdev_getblk(), depending on the respective gfp
flags. There are no changes in semantics.
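
A sketch of the resulting bdev_getblk() dispatch (close to, though
not necessarily verbatim, the actual hunk):

  struct buffer_head *bdev_getblk(struct block_device *bdev,
                                  sector_t block, unsigned size,
                                  gfp_t gfp)
  {
          struct buffer_head *bh;

          /* Blocking callers may safely take the folio lock. */
          if (gfpflags_allow_blocking(gfp))
                  bh = __find_get_block_nonatomic(bdev, block, size);
          else
                  bh = __find_get_block(bdev, block, size);
          if (bh)
                  return bh;

          return __getblk_slow(bdev, block, size, gfp);
  }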

Suggested-by: Jan Kara &lt;jack@suse.cz&gt;
Reviewed-by: Jan Kara &lt;jack@suse.cz&gt;
Signed-off-by: Davidlohr Bueso &lt;dave@stgolabs.net&gt;
Link: https://kdevops.org/ext4/v6.15-rc2.html # [0]
Link: https://lore.kernel.org/all/aAAEvcrmREWa1SKF@bombadil.infradead.org/ # [1]
Link: https://lore.kernel.org/20250418015921.132400-4-dave@stgolabs.net
Tested-by: kdevops@lists.linux.dev # [0] [1]
Reviewed-by: Luis Chamberlain &lt;mcgrof@kernel.org&gt;
Signed-off-by: Christian Brauner &lt;brauner@kernel.org&gt;
</content>
</entry>
<entry>
<title>fs/buffer: introduce sleeping flavors for pagecache lookups</title>
<updated>2025-04-22T16:16:08Z</updated>
<author>
<name>Davidlohr Bueso</name>
<email>dave@stgolabs.net</email>
</author>
<published>2025-04-18T01:59:16Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=2814a7d3d2ff5d2cdd22936f641f758fdb971fa0'/>
<id>urn:sha1:2814a7d3d2ff5d2cdd22936f641f758fdb971fa0</id>
<content type='text'>
Add __find_get_block_nonatomic() and sb_find_get_block_nonatomic(),
to which users will be converted where safe. These versions will
take the folio lock instead of the mapping's private_lock.
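
The expected shape (a sketch; presumably the sb_ helper wraps the
bdev one, mirroring the existing sb_find_get_block()):

  struct buffer_head *
  __find_get_block_nonatomic(struct block_device *bdev, sector_t block,
                             unsigned size);

  static inline struct buffer_head *
  sb_find_get_block_nonatomic(struct super_block *sb, sector_t block)
  {
          return __find_get_block_nonatomic(sb-&gt;s_bdev, block,
                                            sb-&gt;s_blocksize);
  }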

Reviewed-by: Jan Kara &lt;jack@suse.cz&gt;
Signed-off-by: Davidlohr Bueso &lt;dave@stgolabs.net&gt;
Link: https://kdevops.org/ext4/v6.15-rc2.html # [0]
Link: https://lore.kernel.org/all/aAAEvcrmREWa1SKF@bombadil.infradead.org/ # [1]
Link: https://lore.kernel.org/20250418015921.132400-3-dave@stgolabs.net
Tested-by: kdevops@lists.linux.dev
Reviewed-by: Luis Chamberlain &lt;mcgrof@kernel.org&gt;
Signed-off-by: Christian Brauner &lt;brauner@kernel.org&gt;
</content>
</entry>
<entry>
<title>fs/buffer: split locking for pagecache lookups</title>
<updated>2025-04-22T16:16:07Z</updated>
<author>
<name>Davidlohr Bueso</name>
<email>dave@stgolabs.net</email>
</author>
<published>2025-04-18T01:59:15Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=7ffe3de53a885dbb5836541c2178bd07d1bad7df'/>
<id>urn:sha1:7ffe3de53a885dbb5836541c2178bd07d1bad7df</id>
<content type='text'>
Callers of __find_get_block() may or may not allow for blocking
semantics, and it is currently assumed that they do not. Lay out
two paths based on this. The private_lock scheme will continue
to be used for atomic contexts. Otherwise take the folio lock
instead, which protects the buffers against, e.g., migration and
try_to_free_buffers().

Per the "hack idea", the latter can alleviate contention on
the private_lock for bdev mappings. For reasons of determinism
and to avoid making bugs hard to reproduce, trylocking is not
attempted.

No change in semantics. All lookup users still take the spinlock.
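
In outline (illustrative only; the folio lookup and the buffer-ring
walk are elided into hypothetical helpers):

  static struct buffer_head *
  __find_get_block_slow(struct block_device *bdev, sector_t block,
                        bool atomic)
  {
          struct address_space *bd_mapping = bdev-&gt;bd_mapping;
          struct buffer_head *bh;
          struct folio *folio;

          folio = lookup_bdev_folio(bd_mapping, block);  /* hypothetical */
          if (!folio)
                  return NULL;

          if (atomic) {
                  /* Atomic contexts keep the old spinlock scheme. */
                  spin_lock(&amp;bd_mapping-&gt;i_private_lock);
                  bh = folio_buffer_for_block(folio, block);  /* hypothetical */
                  spin_unlock(&amp;bd_mapping-&gt;i_private_lock);
          } else {
                  /* Blocking contexts take the folio lock instead. */
                  folio_lock(folio);
                  bh = folio_buffer_for_block(folio, block);
                  folio_unlock(folio);
          }
          folio_put(folio);
          return bh;
  }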

Reviewed-by: Jan Kara &lt;jack@suse.cz&gt;
Signed-off-by: Davidlohr Bueso &lt;dave@stgolabs.net&gt;
Link: https://kdevops.org/ext4/v6.15-rc2.html # [0]
Link: https://lore.kernel.org/all/aAAEvcrmREWa1SKF@bombadil.infradead.org/ # [1]
Link: https://lore.kernel.org/20250418015921.132400-2-dave@stgolabs.net
Tested-by: kdevops@lists.linux.dev
Reviewed-by: Luis Chamberlain &lt;mcgrof@kernel.org&gt;
Signed-off-by: Christian Brauner &lt;brauner@kernel.org&gt;
</content>
</entry>
</feed>
