<feed xmlns='http://www.w3.org/2005/Atom'>
<title>linux/lib/xarray.c, branch v6.16</title>
<subtitle>Mirror of https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/
</subtitle>
<id>https://git.shady.money/linux/atom?h=v6.16</id>
<link rel='self' href='https://git.shady.money/linux/atom?h=v6.16'/>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/'/>
<updated>2025-05-13T06:50:49Z</updated>
<entry>
<title>xarray: fix kerneldoc for __xa_cmpxchg</title>
<updated>2025-05-13T06:50:49Z</updated>
<author>
<name>Christoph Hellwig</name>
<email>hch@lst.de</email>
</author>
<published>2025-05-07T05:16:20Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=f1c2bca2677b78c66edd1db010fd79d4a08e7146'/>
<id>urn:sha1:f1c2bca2677b78c66edd1db010fd79d4a08e7146</id>
<content type='text'>
Fix the documentation for __xa_cmpxchg to actually describe the
cmpxchg-like semantics correctly, based on the version for xa_cmpxchg.
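
For reference, a minimal usage sketch of the cmpxchg-like semantics being
documented (the index and variable names are illustrative, not from the
patch):

	/* Replace the entry at index 7 only if old_entry is still
	 * present.  As with cmpxchg, the previous entry is returned
	 * either way, so comparing it against old_entry tells whether
	 * the exchange actually happened. */
	void *prev = xa_cmpxchg(&amp;xa, 7, old_entry, new_entry, GFP_KERNEL);

	if (prev == old_entry)
		/* exchanged; otherwise prev holds the conflicting entry */;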

Link: https://lkml.kernel.org/r/20250507051656.3900864-1-hch@lst.de
Signed-off-by: Christoph Hellwig &lt;hch@lst.de&gt;
Cc: Matthew Wilcox (Oracle) &lt;willy@infradead.org&gt;
Signed-off-by: Andrew Morton &lt;akpm@linux-foundation.org&gt;
</content>
</entry>
<entry>
<title>mm/filemap: use xas_try_split() in __filemap_add_folio()</title>
<updated>2025-03-18T05:07:01Z</updated>
<author>
<name>Zi Yan</name>
<email>ziy@nvidia.com</email>
</author>
<published>2025-03-14T22:21:12Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=200a89c159a7a416115e6e309183c82183bf98aa'/>
<id>urn:sha1:200a89c159a7a416115e6e309183c82183bf98aa</id>
<content type='text'>
Patch series "Minimize xa_node allocation during xarry split", v3.

When splitting a multi-index entry in the XArray from order-n to order-m,
the existing xas_split_alloc()+xas_split() approach requires 2^(n %
XA_CHUNK_SHIFT) xa_node allocations.  But its callers,
__filemap_add_folio() and shmem_split_large_entry(), use at most 1
xa_node.  To minimize xa_node allocation and remove the limitation of no
split from order-12 (or above) to order-0 (or anything between 0 and
5)[1], xas_try_split() was added[2], which allocates (n / XA_CHUNK_SHIFT -
m / XA_CHUNK_SHIFT) xa_nodes.  It is used for non-uniform folio split, but
can also be used by __filemap_add_folio() and shmem_split_large_entry().

xas_split_alloc() and xas_split() split an order-9 to order-0:

         ---------------------------------
         |   |   |   |   |   |   |   |   |
         | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 |
         |   |   |   |   |   |   |   |   |
         ---------------------------------
           |   |                   |   |
     -------   ---               ---   -------
     |           |     ...       |           |
     V           V               V           V
----------- -----------     ----------- -----------
| xa_node | | xa_node | ... | xa_node | | xa_node |
----------- -----------     ----------- -----------

xas_try_split() splits an order-9 to order-0:
   ---------------------------------
   |   |   |   |   |   |   |   |   |
   | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 |
   |   |   |   |   |   |   |   |   |
   ---------------------------------
     |
     |
     V
-----------
| xa_node |
-----------

xas_try_split() is designed to be called iteratively with n = m + 1.
xas_try_split_min_order() is added to minimize the number of calls to
xas_try_split() by telling the caller the next minimal order to split to
instead of n - 1.  Splitting order-n to order-m when m = l * XA_CHUNK_SHIFT
does not require xa_node allocation, and requires 1 xa_node when n = l *
XA_CHUNK_SHIFT and m = n - 1, so it is OK to use xas_try_split() with n &gt;
m + 1 when no new xa_node is needed.
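
As an illustration, a hedged sketch of such an iterative split, loosely
modeled on what __filemap_add_folio() does after this series (locking and
the surrounding retry loop are omitted; `target', `index' and `old_entry'
are illustrative names):

	unsigned int split_order = max(xas_try_split_min_order(order),
				       target);

	while (order &gt; target) {
		xas_set_order(&amp;xas, index, split_order);
		/* allocates at most one xa_node per call */
		xas_try_split(&amp;xas, old_entry, order);
		if (xas_error(&amp;xas))
			break;
		order = split_order;
		split_order = max(xas_try_split_min_order(split_order),
				  target);
	}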

xfstests quick group test passed on xfs and tmpfs.

[1] https://lore.kernel.org/linux-mm/Z6YX3RznGLUD07Ao@casper.infradead.org/
[2] https://lore.kernel.org/linux-mm/20250226210032.2044041-1-ziy@nvidia.com/


This patch (of 2):

During __filemap_add_folio(), a shadow entry may be covering n slots while
a folio covering m slots (m &lt; n) is to be added.  Instead of splitting
all n slots, only the m slots covered by the folio need to be split and
the remaining n-m shadow entries can be retained with orders ranging from
m to n-1.  This method only requires

	(n/XA_CHUNK_SHIFT) - (m/XA_CHUNK_SHIFT)

new xa_nodes instead of

	2^(n % XA_CHUNK_SHIFT) * ((n/XA_CHUNK_SHIFT) - (m/XA_CHUNK_SHIFT))

new xa_nodes, compared to the original xas_split_alloc() + xas_split()
one.  For example, to insert an order-0 folio when an order-9 shadow entry
is present (assuming XA_CHUNK_SHIFT is 6), 1 xa_node is needed instead of
8.

xas_try_split_min_order() is introduced to reduce the number of calls to
xas_try_split() during split.

Link: https://lkml.kernel.org/r/20250314222113.711703-1-ziy@nvidia.com
Link: https://lkml.kernel.org/r/20250314222113.711703-2-ziy@nvidia.com
Signed-off-by: Zi Yan &lt;ziy@nvidia.com&gt;
Cc: Baolin Wang &lt;baolin.wang@linux.alibaba.com&gt;
Cc: Hugh Dickins &lt;hughd@google.com&gt;
Cc: Kairui Song &lt;kasong@tencent.com&gt;
Cc: Miaohe Lin &lt;linmiaohe@huawei.com&gt;
Cc: Matthew Wilcox &lt;willy@infradead.org&gt;
Cc: David Hildenbrand &lt;david@redhat.com&gt;
Cc: John Hubbard &lt;jhubbard@nvidia.com&gt;
Cc: Kefeng Wang &lt;wangkefeng.wang@huawei.com&gt;
Cc: Kirill A. Shutemov &lt;kirill.shutemov@linux.intel.com&gt;
Cc: Ryan Roberts &lt;ryan.roberts@arm.com&gt;
Cc: Yang Shi &lt;yang@os.amperecomputing.com&gt;
Cc: Yu Zhao &lt;yuzhao@google.com&gt;
Signed-off-by: Andrew Morton &lt;akpm@linux-foundation.org&gt;
</content>
</entry>
<entry>
<title>xarray: add xas_try_split() to split a multi-index entry</title>
<updated>2025-03-18T05:06:59Z</updated>
<author>
<name>Zi Yan</name>
<email>ziy@nvidia.com</email>
</author>
<published>2025-03-07T17:39:54Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=3fec86f8aa8c7ae84567a0d1396e84ada96141d8'/>
<id>urn:sha1:3fec86f8aa8c7ae84567a0d1396e84ada96141d8</id>
<content type='text'>
Patch series "Buddy allocator like (or non-uniform) folio split", v10.

This patchset adds a new buddy allocator like (or non-uniform) large folio
split from an order-n folio to order-m, with m &lt; n.  It reduces

1. the total number of after-split folios from 2^(n-m) to n-m+1;

2. the amount of memory needed for multi-index xarray split from 2^(n % 6)
   to n/6 - m/6, assuming XA_CHUNK_SHIFT=6;

3. the loss of large folios after a split, by keeping folios from
   order-(n-1) down to order-m instead of all order-m folios.

For example, to split an order-9 to order-0, folio split generates 10 (or
11 for anonymous memory) folios instead of 512, allocates 1 xa_node
instead of 8, and leaves 1 order-8, 1 order-7, ..., 1 order-1 and 2
order-0 folios (or 4 order-0 for anonymous memory) instead of 512 order-0
folios.

Instead of duplicating existing split_huge_page*() code, __folio_split()
is introduced as the shared backend code for both
split_huge_page_to_list_to_order() and folio_split().  __folio_split() can
support both uniform split and buddy allocator like (or non-uniform)
split.  All existing split_huge_page*() users can be gradually converted
to use folio_split() if possible.  In this patchset, I converted
truncate_inode_partial_folio() to use folio_split().

xfstests quick group passed for both tmpfs and xfs.  I also
semi-replicated Hugh's test[12] and ran it without any issue for almost 24
hours.


This patch (of 8):

A preparation patch for non-uniform folio split, which always splits a
folio in half iteratively, and for minimal xarray entry split.

Currently, xas_split_alloc() and xas_split() always split all slots from a
multi-index entry.  They cost as many xa_nodes as there are to-be-split
slots.  For example, to split an order-9 entry, which takes 2^(9-6)=8
slots, assuming XA_CHUNK_SHIFT is 6 (!CONFIG_BASE_SMALL), 8 xa_nodes are
needed.  Instead, xas_try_split() is intended to be used iteratively to
split the order-9 entry into 2 order-8 entries, then split one order-8
entry, based on the given index, into 2 order-7 entries, ..., and split
one order-1 entry into 2 order-0 entries.  When splitting the order-6
entry and a new xa_node is needed, xas_try_split() will try to allocate
one if possible.  As a result, xas_try_split() would only need 1 xa_node
instead of 8.

When a new xa_node is needed during the split, xas_try_split() can try to
allocate one but no more.  -ENOMEM will be returned if a node cannot be
allocated.  -EINVAL will be returned if a sibling entry is split or a
cascading split happens, where two or more new nodes are needed; these
cases are not supported by xas_try_split().
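
For illustration, a hedged sketch of how a caller might drive the split
and handle these errors (the label and variable names are hypothetical):

	xas_try_split(&amp;xas, entry, order);
	if (xas_error(&amp;xas)) {
		/* -ENOMEM: the single allowed xa_node allocation failed;
		 * -EINVAL: two or more new nodes would be needed, which
		 * xas_try_split() does not support */
		error = xas_error(&amp;xas);
		goto unlock;
	}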

xas_split_alloc() and xas_split() split an order-9 to order-0:

         ---------------------------------
         |   |   |   |   |   |   |   |   |
         | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 |
         |   |   |   |   |   |   |   |   |
         ---------------------------------
           |   |                   |   |
     -------   ---               ---   -------
     |           |     ...       |           |
     V           V               V           V
----------- -----------     ----------- -----------
| xa_node | | xa_node | ... | xa_node | | xa_node |
----------- -----------     ----------- -----------

xas_try_split() splits an order-9 to order-0:
   ---------------------------------
   |   |   |   |   |   |   |   |   |
   | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 |
   |   |   |   |   |   |   |   |   |
   ---------------------------------
     |
     |
     V
-----------
| xa_node |
-----------

Link: https://lkml.kernel.org/r/20250307174001.242794-1-ziy@nvidia.com
Link: https://lkml.kernel.org/r/20250307174001.242794-2-ziy@nvidia.com
Signed-off-by: Zi Yan &lt;ziy@nvidia.com&gt;
Cc: Baolin Wang &lt;baolin.wang@linux.alibaba.com&gt;
Cc: David Hildenbrand &lt;david@redhat.com&gt;
Cc: Hugh Dickins &lt;hughd@google.com&gt;
Cc: John Hubbard &lt;jhubbard@nvidia.com&gt;
Cc: Kefeng Wang &lt;wangkefeng.wang@huawei.com&gt;
Cc: Kirill A. Shutemov &lt;kirill.shutemov@linux.intel.com&gt;
Cc: Miaohe Lin &lt;linmiaohe@huawei.com&gt;
Cc: Matthew Wilcox &lt;willy@infradead.org&gt;
Cc: Ryan Roberts &lt;ryan.roberts@arm.com&gt;
Cc: Yang Shi &lt;yang@os.amperecomputing.com&gt;
Cc: Yu Zhao &lt;yuzhao@google.com&gt;
Cc: Zi Yan &lt;ziy@nvidia.com&gt;
Cc: Kairui Song &lt;kasong@tencent.com&gt;
Signed-off-by: Andrew Morton &lt;akpm@linux-foundation.org&gt;
</content>
</entry>
<entry>
<title>Xarray: use xa_mark_t in xas_squash_marks() to keep code consistent</title>
<updated>2025-01-25T06:47:28Z</updated>
<author>
<name>Kemeng Shi</name>
<email>shikemeng@huaweicloud.com</email>
</author>
<published>2024-12-13T12:25:23Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=13fd5cf37456620b57106dbfa89c5aad5f195343'/>
<id>urn:sha1:13fd5cf37456620b57106dbfa89c5aad5f195343</id>
<content type='text'>
Besides xas_squash_marks(), all functions use the xa_mark_t type to
iterate over all possible marks.  Use xa_mark_t in xas_squash_marks() as
well to keep the code consistent.
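
For reference, the mark-iteration pattern in question, with the mark held
in an xa_mark_t (a hedged sketch of the pattern, not the patched function
verbatim):

	xa_mark_t mark = 0;

	for (;;) {
		/* ... operate on the bitmap for this mark ... */
		if (mark == XA_MARK_MAX)
			break;
		mark = (__force xa_mark_t)((__force unsigned)mark + 1);
	}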

Link: https://lkml.kernel.org/r/20241213122523.12764-6-shikemeng@huaweicloud.com
Signed-off-by: Kemeng Shi &lt;shikemeng@huaweicloud.com&gt;
Cc: Matthew Wilcox &lt;willy@infradead.org&gt;
Signed-off-by: Andrew Morton &lt;akpm@linux-foundation.org&gt;
</content>
</entry>
<entry>
<title>Xarray: remove repeat check in xas_squash_marks()</title>
<updated>2025-01-25T06:47:27Z</updated>
<author>
<name>Kemeng Shi</name>
<email>shikemeng@huaweicloud.com</email>
</author>
<published>2024-12-13T12:25:22Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=1988b318b32d3a0260e05bf6643990230df35843'/>
<id>urn:sha1:1988b318b32d3a0260e05bf6643990230df35843</id>
<content type='text'>
The caller of xas_squash_marks() has already ensured that xas-&gt;xa_sibs
is non-zero, so remove the repeated check of xas-&gt;xa_sibs in
xas_squash_marks().

Link: https://lkml.kernel.org/r/20241213122523.12764-5-shikemeng@huaweicloud.com
Signed-off-by: Kemeng Shi &lt;shikemeng@huaweicloud.com&gt;
Cc: Matthew Wilcox &lt;willy@infradead.org&gt;
Signed-off-by: Andrew Morton &lt;akpm@linux-foundation.org&gt;
</content>
</entry>
<entry>
<title>Xarray: distinguish large entries correctly in xas_split_alloc()</title>
<updated>2025-01-25T06:47:27Z</updated>
<author>
<name>Kemeng Shi</name>
<email>shikemeng@huaweicloud.com</email>
</author>
<published>2024-12-13T12:25:21Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=97db889b961ef3f849813de34bd3ea5715813ed0'/>
<id>urn:sha1:97db889b961ef3f849813de34bd3ea5715813ed0</id>
<content type='text'>
Splitting does not support large entries that need two more levels of
xa_node to expand.  For the case "xas-&gt;xa_shift + 2 * XA_CHUNK_SHIFT ==
order", two levels of xa_node are also needed, so treat the entry as a
large (unsupported) entry in that case as well.

As the max order of a folio in the page cache (MAX_PAGECACHE_ORDER) is
&lt;= (XA_CHUNK_SHIFT * 2 - 1), this change is more of a cleanup than a fix
for a problem that can actually happen.
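
A hedged reconstruction of the adjusted check in xas_split_alloc() (the
exact source may differ):

	/* No support for splitting entries that need two or more new
	 * levels of xa_node; the old "&lt;" let the == case through. */
	if (WARN_ON(xas-&gt;xa_shift + 2 * XA_CHUNK_SHIFT &lt;= order))
		goto nomem;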

Link: https://lkml.kernel.org/r/20241213122523.12764-4-shikemeng@huaweicloud.com
Signed-off-by: Kemeng Shi &lt;shikemeng@huaweicloud.com&gt;
Cc: Matthew Wilcox &lt;willy@infradead.org&gt;
Signed-off-by: Andrew Morton &lt;akpm@linux-foundation.org&gt;
</content>
</entry>
<entry>
<title>Xarray: move forward index correctly in xas_pause()</title>
<updated>2025-01-25T06:47:27Z</updated>
<author>
<name>Kemeng Shi</name>
<email>shikemeng@huaweicloud.com</email>
</author>
<published>2024-12-13T12:25:20Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=c9ba5249ef8b080c6779d3da14a061f6d4d6d5fa'/>
<id>urn:sha1:c9ba5249ef8b080c6779d3da14a061f6d4d6d5fa</id>
<content type='text'>
After xas_load(), xas-&gt;index may point into the middle of the found
multi-index entry, so the bits of xas-&gt;index below node-&gt;shift may be
non-zero.  A subsequent xas_pause() then advances xas-&gt;index in units of
1 &lt;&lt; node-&gt;shift with the bits below node-&gt;shift left un-masked,
and thus unexpectedly skips some indices.

Consider the following case:
Assume XA_CHUNK_SHIFT is 4.
xa_store_range(xa, 16, 31, ...)
xa_store(xa, 32, ...)
XA_STATE(xas, xa, 17);
xas_for_each(&amp;xas,...)
xas_load(&amp;xas)
/* xas-&gt;index = 17, xas-&gt;xa_offset = 1, xas-&gt;xa_node-&gt;shift = 4 */
xas_pause()
/* xas-&gt;index = 33, xas-&gt;xa_offset = 2, xas-&gt;xa_node-&gt;shift = 4 */
As we can see, index 32 is unexpectedly skipped.

Fix this by masking off the bits below node-&gt;shift when moving the index
forward in xas_pause().
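
A hedged sketch of the fixed advance in xas_pause(), reconstructed from
the description above:

	/* Round xa_index down to the current slot boundary first, so
	 * the un-masked low bits cannot push the index past a slot. */
	xas-&gt;xa_index &amp;= ~0UL &lt;&lt; node-&gt;shift;
	xas-&gt;xa_index += (offset - xas-&gt;xa_offset) &lt;&lt; node-&gt;shift;

With the example above this yields (17 &amp; ~15) + ((2 - 1) &lt;&lt; 4) = 32
instead of 33, so index 32 is no longer skipped.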

For now, this does not cause serious problems; only minor ones, such as
cachestat() returning fewer page statuses than expected, can happen.

Link: https://lkml.kernel.org/r/20241213122523.12764-3-shikemeng@huaweicloud.com
Signed-off-by: Kemeng Shi &lt;shikemeng@huaweicloud.com&gt;
Cc: Matthew Wilcox &lt;willy@infradead.org&gt;
Signed-off-by: Andrew Morton &lt;akpm@linux-foundation.org&gt;
</content>
</entry>
<entry>
<title>Xarray: do not return sibling entries from xas_find_marked()</title>
<updated>2025-01-25T06:47:27Z</updated>
<author>
<name>Kemeng Shi</name>
<email>shikemeng@huaweicloud.com</email>
</author>
<published>2024-12-13T12:25:19Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=7e060df04f562b37bdc101fd06b16f013e2d989b'/>
<id>urn:sha1:7e060df04f562b37bdc101fd06b16f013e2d989b</id>
<content type='text'>
Patch series "Fixes and cleanups to xarray", v5.

This series contains some random fixes and cleanups to xarray.  Patches
1-2 are fixes and patches 3-6 are cleanups.  More details can be found in
the respective patches.


This patch (of 5):

Similar to the issue fixed in commit cbc02854331e ("XArray: Do not return
sibling entries from xa_load()"), we may return sibling entries from
xas_find_marked() as follows:
    Thread A:               Thread B:
                            xa_store_range(xa, entry, 6, 7, gfp);
                            xa_set_mark(xa, 6, mark)
    XA_STATE(xas, xa, 6);
    xas_find_marked(&amp;xas, 7, mark);
    offset = xas_find_chunk(xas, advance, mark);
    [offset is 6 which points to a valid entry]
                            xa_store_range(xa, entry, 4, 7, gfp);
    entry = xa_entry(xa, node, 6);
    [entry is a sibling of 4]
    if (!xa_is_node(entry))
        return entry;

Skip sibling entries as xas_find() does, to protect callers from seeing a
sibling entry returned by xas_find_marked(); otherwise a caller may use
the sibling entry as a valid entry and crash the kernel.
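
A hedged sketch of the added check, modeled on how xas_find() handles
siblings (the surrounding scan loop is elided):

	entry = xa_entry(xas-&gt;xa, xas-&gt;xa_node, xas-&gt;xa_offset);
	if (xa_is_sibling(entry))
		continue;	/* keep scanning rather than return it */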

Besides, the load_race() test is modified to catch the mentioned issue;
the modified load_race() only passes after this fix is merged.

Here is an example of how this bug can be triggered in tmpfs with large
folios enabled in the mapping:
Let's take a look at the racers involved:
1. How pages are created and dirtied in a shmem file.
write
 ksys_write
  vfs_write
   new_sync_write
    shmem_file_write_iter
     generic_perform_write
      shmem_write_begin
       shmem_get_folio
        shmem_allowable_huge_orders
        shmem_alloc_and_add_folios
        shmem_alloc_folio
        __folio_set_locked
        shmem_add_to_page_cache
         XA_STATE_ORDER(..., index, order)
         xas_store()
      shmem_write_end
       folio_mark_dirty()

2. How dirty pages can be deleted in a shmem file.
ioctl
 do_vfs_ioctl
  file_ioctl
   ioctl_preallocate
    vfs_fallocate
     shmem_fallocate
      shmem_truncate_range
       shmem_undo_range
        truncate_inode_folio
         filemap_remove_folio
          page_cache_delete
           xas_store(&amp;xas, NULL);

3. How dirty pages can be searched locklessly.
sync_file_range
 ksys_sync_file_range
  __filemap_fdatawrite_range
   filemap_fdatawrite_wbc
    do_writepages
     writeback_use_writepage
      writeback_iter
       writeback_get_folio
        filemap_get_folios_tag
         find_get_entry
          folio = xas_find_marked()
          folio_try_get(folio)

The kernel will crash as follows:
1. Create              2. Search            3. Delete
/* write page 2,3 */
write
 ...
  shmem_write_begin
   XA_STATE_ORDER(xas, i_pages, index = 2, order = 1)
   xas_store(&amp;xas, folio)
  shmem_write_end
   folio_mark_dirty()

                       /* sync page 2 and page 3 */
                       sync_file_range
                        ...
                         find_get_entry
                          folio = xas_find_marked()
                          /* offset will be 2 */
                          offset = xas_find_chunk()

                                             /* delete page 2 and page 3 */
                                             ioctl
                                              ...
                                               xas_store(&amp;xas, NULL);

/* write page 0-3 */
write
 ...
  shmem_write_begin
   XA_STATE_ORDER(xas, i_pages, index = 0, order = 2)
   xas_store(&amp;xas, folio)
  shmem_write_end
   folio_mark_dirty(folio)

                          /* get sibling entry from offset 2 */
                          entry = xa_entry(.., 2)
                          /* use sibling entry as folio and crash kernel */
                          folio_try_get(folio)

Link: https://lkml.kernel.org/r/20241213122523.12764-1-shikemeng@huaweicloud.com
Link: https://lkml.kernel.org/r/20241213122523.12764-2-shikemeng@huaweicloud.com
Signed-off-by: Kemeng Shi &lt;shikemeng@huaweicloud.com&gt;
Cc: Matthew Wilcox &lt;willy@infradead.org&gt; [English fixes]
Signed-off-by: Andrew Morton &lt;akpm@linux-foundation.org&gt;
</content>
</entry>
<entry>
<title>xarray: extract helper from __xa_{insert,cmpxchg}</title>
<updated>2025-01-13T04:20:58Z</updated>
<author>
<name>Tamir Duberstein</name>
<email>tamird@gmail.com</email>
</author>
<published>2024-11-12T19:25:37Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=79ada2ae66153e285b3717ee4d4a56c8e517c1fa'/>
<id>urn:sha1:79ada2ae66153e285b3717ee4d4a56c8e517c1fa</id>
<content type='text'>
Reduce code duplication by extracting a static inline function.  This
function is identical to __xa_cmpxchg with the exception that it does not
coerce zero entries to null on the return path.
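
A hedged sketch of the extracted helper, named __xa_cmpxchg_raw() per the
series title (details may differ from the merged code):

static inline void *__xa_cmpxchg_raw(struct xarray *xa, unsigned long index,
		void *old, void *entry, gfp_t gfp)
{
	XA_STATE(xas, xa, index);
	void *curr;

	do {
		curr = xas_load(&amp;xas);
		if (curr == old)
			xas_store(&amp;xas, entry);
	} while (__xas_nomem(&amp;xas, gfp));

	return xas_result(&amp;xas, curr);
}

__xa_cmpxchg() then becomes a call to this helper followed by the
zero-to-null coercion, and __xa_insert() can reuse it as well.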

[tamird@gmail.com: fix __xa_erase()]
  Link: https://lkml.kernel.org/r/CAJ-ks9kN_qddZ3Ne5d=cADu5POC1rHd4rQcbVSD_spnZOrLLZg@mail.gmail.com
Link: https://lkml.kernel.org/r/20241112-xarray-insert-cmpxchg-v1-2-dc2bdd8c4136@gmail.com
Signed-off-by: Tamir Duberstein &lt;tamird@gmail.com&gt;
Cc: Alice Ryhl &lt;aliceryhl@google.com&gt;
Cc: Andreas Hindborg &lt;a.hindborg@kernel.org&gt;
Cc: Matthew Wilcox &lt;willy@infradead.org&gt;
Signed-off-by: Andrew Morton &lt;akpm@linux-foundation.org&gt;
</content>
</entry>
<entry>
<title>xarray: extract xa_zero_to_null</title>
<updated>2025-01-13T04:20:58Z</updated>
<author>
<name>Tamir Duberstein</name>
<email>tamird@gmail.com</email>
</author>
<published>2024-11-12T19:25:36Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=74e2712b14e35a200ba24b0b99e56a2d91499ff0'/>
<id>urn:sha1:74e2712b14e35a200ba24b0b99e56a2d91499ff0</id>
<content type='text'>
Patch series "xarray: extract __xa_cmpxchg_raw".

This series reduces duplication between __xa_cmpxchg and __xa_insert by
extracting a new function that does not coerce zero entries to null on the
return path.

The new function may be used by the upcoming Rust xarray abstraction in
its reservation API where it is useful to tell the difference between zero
entries and null slots.


This patch (of 2):

Reduce code duplication by extracting a static inline function that
returns its argument if it is non-zero and NULL otherwise.
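
A minimal sketch of such a helper; the body follows directly from the
description:

static inline void *xa_zero_to_null(void *entry)
{
	return xa_is_zero(entry) ? NULL : entry;
}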

This changes xas_result() to check for errors before checking for zero,
but this cannot change the behavior of existing callers (a sketch of the
reordered xas_result() follows this list):
- __xa_erase: passes the result of xas_store(_, NULL) which cannot fail.
- __xa_store: passes the result of xas_store(_, entry) which may fail.
  xas_store calls xas_create when entry is not NULL which returns NULL
  on error, which is immediately checked. This should not change
  observable behavior.
- __xa_cmpxchg: passes the result of xas_load(_) which might be zero.
  This would previously return NULL regardless of the outcome of
  xas_store but xas_store cannot fail if xas_load returns zero
  because there is no need to allocate memory.
- xa_store_range: same as __xa_erase.
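
A hedged sketch of the reordered xas_result(), reconstructed from the
description (the merged code may differ):

static void *xas_result(struct xa_state *xas, void *curr)
{
	if (xas_error(xas))
		curr = xas-&gt;xa_node;	/* encoded error value */
	return xa_zero_to_null(curr);
}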

Link: https://lkml.kernel.org/r/20241112-xarray-insert-cmpxchg-v1-0-dc2bdd8c4136@gmail.com
Link: https://lkml.kernel.org/r/20241112-xarray-insert-cmpxchg-v1-1-dc2bdd8c4136@gmail.com
Signed-off-by: Tamir Duberstein &lt;tamird@gmail.com&gt;
Cc: Alice Ryhl &lt;aliceryhl@google.com&gt;
Cc: Andreas Hindborg &lt;a.hindborg@kernel.org&gt;
Cc: Matthew Wilcox &lt;willy@infradead.org&gt;
Signed-off-by: Andrew Morton &lt;akpm@linux-foundation.org&gt;
</content>
</entry>
</feed>
