| author | Filipe Manana <fdmanana@suse.com> | 2026-03-10 15:29:33 +0000 |
|---|---|---|
| committer | David Sterba <dsterba@suse.com> | 2026-04-07 18:56:02 +0200 |
| commit | aa40d5601e66d873d3095e07037fc070da16aab5 (patch) | |
| tree | 5de3e56d24bcc4c136ac22fd669f8ca60b558d9c | |
| parent | 908ab5634751c4168e864d56a5270e251ce89ee3 (diff) | |
| download | linux-aa40d5601e66d873d3095e07037fc070da16aab5.tar.gz linux-aa40d5601e66d873d3095e07037fc070da16aab5.zip | |
btrfs: optimize clearing all bits from the last extent record in an io tree
When we are clearing all the bits from the last record that contains the
target range (i.e. the record starts before our target range and ends
beyond it), we are doing a lot of unnecessary work:
1) Allocating a prealloc state if we don't have one already;
2) Adjusting that last record's start offset to the end of our range and
making the prealloc state cover a range going from the original start
offset of that last record to the end offset of our target range, with
the same bits as the last record. Then we insert the prealloc state
into the rbtree - this is done in split_state();
3) Removing our prealloc state from the rbtree since all its bits were
cleared - this is done in clear_state_bit().
All of this is wasted work, since we can simply trim the last record so
that its start offset is adjusted to the end of the target range. So
optimize for that case and avoid the prealloc state allocation, insertion
and deletion from the rbtree.
Signed-off-by: Filipe Manana <fdmanana@suse.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
| -rw-r--r-- | fs/btrfs/extent-io-tree.c | 39 |
1 file changed, 39 insertions, 0 deletions
```diff
diff --git a/fs/btrfs/extent-io-tree.c b/fs/btrfs/extent-io-tree.c
index d0dd50f7d279..4ba916cb27ac 100644
--- a/fs/btrfs/extent-io-tree.c
+++ b/fs/btrfs/extent-io-tree.c
@@ -724,6 +724,45 @@ hit_next:
 	 * We need to split the extent, and clear the bit on the first half.
 	 */
 	if (state->start <= end && state->end > end) {
+		const u32 bits_to_clear = bits & ~EXTENT_CTLBITS;
+
+		/*
+		 * If all bits are cleared, there's no point in allocating or
+		 * using the prealloc extent, split the state record, insert the
+		 * prealloc record and then remove it. We can just adjust the
+		 * start offset of the current state and avoid all that.
+		 */
+		if ((state->state & ~bits_to_clear) == 0) {
+			const u64 orig_end = state->end;
+
+			if (tree->owner == IO_TREE_INODE_IO)
+				btrfs_split_delalloc_extent(tree->inode, state, end + 1);
+
+			/*
+			 * Temporarily adjust the end offset to match the
+			 * removed subrange to update the changeset.
+			 */
+			state->end = end;
+
+			ret = add_extent_changeset(state, bits_to_clear, changeset, 0);
+			if (unlikely(ret < 0)) {
+				extent_io_tree_panic(tree, state,
+						     "add_extent_changeset", ret);
+				goto out;
+			}
+			ret = 0;
+
+			if (tree->owner == IO_TREE_INODE_IO)
+				btrfs_clear_delalloc_extent(tree->inode, state, bits);
+
+			state->start = end + 1;
+			state->end = orig_end;
+
+			if (wake)
+				wake_up(&state->wq);
+			goto out;
+		}
+
 		prealloc = alloc_extent_state_atomic(prealloc);
 		if (!prealloc)
 			goto search_again;
```
