<feed xmlns='http://www.w3.org/2005/Atom'>
<title>linux/kernel/dma/direct.c, branch v6.7</title>
<subtitle>Mirror of https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/</subtitle>
<id>https://git.shady.money/linux/atom?h=v6.7</id>
<link rel='self' href='https://git.shady.money/linux/atom?h=v6.7'/>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/'/>
<updated>2023-11-06T07:38:16Z</updated>
<entry>
<title>dma-mapping: fix dma_addressing_limited() if dma_range_map can't cover all system RAM</title>
<updated>2023-11-06T07:38:16Z</updated>
<author>
<name>Jia He</name>
<email>justin.he@arm.com</email>
</author>
<published>2023-10-28T10:20:59Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=a409d9600959f3c4b2a48946304c8e01b8d04072'/>
<id>urn:sha1:a409d9600959f3c4b2a48946304c8e01b8d04072</id>
<content type='text'>
There is an unusual case in which the range map covers right up to the
top of system RAM but leaves a hole somewhere lower down. This prevents
the nvme device from mapping DMA buffers in the checking path of
phys_to_dma() and causes hangs at boot.

E.g., on an Armv8 Ampere server, the DSDT ACPI table contains:
 Method (_DMA, 0, Serialized)  // _DMA: Direct Memory Access
 {
     Name (RBUF, ResourceTemplate ()
     {
         QWordMemory (ResourceConsumer, PosDecode, MinFixed, MaxFixed,
             Cacheable, ReadWrite,
             0x0000000000000000, // Granularity
             0x0000000000000000, // Range Minimum
             0x00000000FFFFFFFF, // Range Maximum
             0x0000000000000000, // Translation Offset
             0x0000000100000000, // Length
             ,, , AddressRangeMemory, TypeStatic)
         QWordMemory (ResourceConsumer, PosDecode, MinFixed, MaxFixed,
             Cacheable, ReadWrite,
             0x0000000000000000, // Granularity
             0x0000006010200000, // Range Minimum
             0x000000602FFFFFFF, // Range Maximum
             0x0000000000000000, // Translation Offset
             0x000000001FE00000, // Length
             ,, , AddressRangeMemory, TypeStatic)
         QWordMemory (ResourceConsumer, PosDecode, MinFixed, MaxFixed,
             Cacheable, ReadWrite,
             0x0000000000000000, // Granularity
             0x00000060F0000000, // Range Minimum
             0x00000060FFFFFFFF, // Range Maximum
             0x0000000000000000, // Translation Offset
             0x0000000010000000, // Length
             ,, , AddressRangeMemory, TypeStatic)
         QWordMemory (ResourceConsumer, PosDecode, MinFixed, MaxFixed,
             Cacheable, ReadWrite,
             0x0000000000000000, // Granularity
             0x0000007000000000, // Range Minimum
             0x000003FFFFFFFFFF, // Range Maximum
             0x0000000000000000, // Translation Offset
             0x0000039000000000, // Length
             ,, , AddressRangeMemory, TypeStatic)
     })
 }

But the System RAM ranges are:
cat /proc/iomem | grep -i ram
90000000-91ffffff : System RAM
92900000-fffbffff : System RAM
880000000-fffffffff : System RAM
8800000000-bff5990fff : System RAM
bff59d0000-bff5a4ffff : System RAM
bff8000000-bfffffffff : System RAM
So some System RAM ranges fall outside the dma_range_map.

Fix it by checking whether each of the system RAM resources can be
properly encompassed within the dma_range_map.
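
A minimal sketch of such a coverage check (hedged: the helper names
below and the use of walk_system_ram_range() are illustrative
assumptions, not necessarily the exact shape of the merged fix):

static int check_ram_in_range_map(unsigned long start_pfn,
				  unsigned long nr_pages, void *data)
{
	phys_addr_t end = PFN_PHYS(start_pfn + nr_pages);
	phys_addr_t start = PFN_PHYS(start_pfn);
	struct device *dev = data;

	while (start &lt; end) {
		const struct bus_dma_region *m, *hit = NULL;

		/* Find a map entry covering the current position. */
		for (m = dev-&gt;dma_range_map; m-&gt;size; m++) {
			if (start &gt;= m-&gt;cpu_start &amp;&amp;
			    start &lt; m-&gt;cpu_start + m-&gt;size) {
				hit = m;
				break;
			}
		}
		if (!hit)
			return 1;	/* uncovered RAM, stop the walk */
		start = hit-&gt;cpu_start + hit-&gt;size;
	}
	return 0;
}

/* True only if every System RAM resource lies inside the map. */
static bool all_ram_in_dma_range_map(struct device *dev)
{
	return !walk_system_ram_range(0, PFN_DOWN(ULLONG_MAX), dev,
				      check_ram_in_range_map);
}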

Signed-off-by: Jia He &lt;justin.he@arm.com&gt;
Signed-off-by: Christoph Hellwig &lt;hch@lst.de&gt;
</content>
</entry>
<entry>
<title>dma-direct: warn when coherent allocations aren't supported</title>
<updated>2023-10-22T14:38:54Z</updated>
<author>
<name>Christoph Hellwig</name>
<email>hch@lst.de</email>
</author>
<published>2023-10-06T13:17:54Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=63f067e33c996a6e4d9a34d138116a43105182f1'/>
<id>urn:sha1:63f067e33c996a6e4d9a34d138116a43105182f1</id>
<content type='text'>
Log a warning once when dma_alloc_coherent fails because the platform
does not support coherent allocations at all.
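
As a rough illustration, the warning could sit in the allocator's
failure path along these lines (the surrounding condition is an
assumption for the sketch, not the exact check):

	/* Sketch: no mechanism for coherent allocations exists, so
	 * fail the allocation but leave one breadcrumb in the log. */
	if (!dev_is_dma_coherent(dev) &amp;&amp;
	    !IS_ENABLED(CONFIG_ARCH_HAS_DMA_SET_UNCACHED) &amp;&amp;
	    !IS_ENABLED(CONFIG_DMA_DIRECT_REMAP)) {
		dev_warn_once(dev,
			      "coherent DMA allocations not supported on this platform\n");
		return NULL;
	}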

Signed-off-by: Christoph Hellwig &lt;hch@lst.de&gt;
Reviewed-by: Robin Murphy &lt;robin.murphy@arm.com&gt;
Reviewed-by: Greg Ungerer &lt;gerg@linux-m68k.org&gt;
Tested-by: Greg Ungerer &lt;gerg@linux-m68k.org&gt;
</content>
</entry>
<entry>
<title>dma-direct: simplify the use atomic pool logic in dma_direct_alloc</title>
<updated>2023-10-22T14:38:54Z</updated>
<author>
<name>Christoph Hellwig</name>
<email>hch@lst.de</email>
</author>
<published>2023-10-06T13:13:34Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=b1da46d70e5427a9ba5eae28bcca29a20e68a442'/>
<id>urn:sha1:b1da46d70e5427a9ba5eae28bcca29a20e68a442</id>
<content type='text'>
The logic in dma_direct_alloc for deciding when to use the atomic pool
vs. remapping grew a bit unreadable.  Consolidate it into a single
check, and clean up the set_uncached vs. remap logic a bit as well.
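
For illustration, the consolidated check can be a single small
predicate along these lines (a sketch; the helper name is an
assumption):

/* Use the atomic pools whenever we may not block and there is no
 * restricted swiotlb pool to allocate from instead. */
static inline bool dma_direct_use_pool(struct device *dev, gfp_t gfp)
{
	return !gfpflags_allow_blocking(gfp) &amp;&amp;
	       !is_swiotlb_for_alloc(dev);
}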

Signed-off-by: Christoph Hellwig &lt;hch@lst.de&gt;
Reviewed-by: Robin Murphy &lt;robin.murphy@arm.com&gt;
Reviewed-by: Greg Ungerer &lt;gerg@linux-m68k.org&gt;
Tested-by: Greg Ungerer &lt;gerg@linux-m68k.org&gt;
</content>
</entry>
<entry>
<title>dma-direct: add a CONFIG_ARCH_HAS_DMA_ALLOC symbol</title>
<updated>2023-10-22T14:38:54Z</updated>
<author>
<name>Christoph Hellwig</name>
<email>hch@lst.de</email>
</author>
<published>2023-10-05T07:05:36Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=2c8ed1b960fb97c82ede5afc974329bfdb457f5f'/>
<id>urn:sha1:2c8ed1b960fb97c82ede5afc974329bfdb457f5f</id>
<content type='text'>
Instead of using arch_dma_alloc if none of the generic coherent
allocators are used, require the architectures to explicitly opt into
providing it.  This will be used to deal with the case of m68knommu and
coldfire where we can't do any coherent allocations whatsoever, and
also makes it clear that arch_dma_alloc is a last resort.
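
A sketch of the resulting last-resort dispatch at the end of the
generic allocator (hedged; the exact guard may differ):

	/* arch_dma_alloc() is only reachable when the architecture
	 * explicitly selected CONFIG_ARCH_HAS_DMA_ALLOC, making its
	 * last-resort nature explicit. */
	if (IS_ENABLED(CONFIG_ARCH_HAS_DMA_ALLOC))
		return arch_dma_alloc(dev, size, dma_handle, gfp, attrs);
	return NULL;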

Signed-off-by: Christoph Hellwig &lt;hch@lst.de&gt;
Reviewed-by: Robin Murphy &lt;robin.murphy@arm.com&gt;
Reviewed-by: Greg Ungerer &lt;gerg@linux-m68k.org&gt;
Tested-by: Greg Ungerer &lt;gerg@linux-m68k.org&gt;
</content>
</entry>
<entry>
<title>swiotlb: if swiotlb is full, fall back to a transient memory pool</title>
<updated>2023-08-01T16:02:20Z</updated>
<author>
<name>Petr Tesarik</name>
<email>petr.tesarik.ext@huawei.com</email>
</author>
<published>2023-08-01T06:24:01Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=79636caad3618e2b38457f6e298c9b31ba82b489'/>
<id>urn:sha1:79636caad3618e2b38457f6e298c9b31ba82b489</id>
<content type='text'>
Try to allocate a transient memory pool if no suitable slots can be found
and the respective SWIOTLB is allowed to grow. The transient pool is
just big enough for this one bounce buffer. It is inserted into a per-device
list of transient memory pools, and it is freed again when the bounce
buffer is unmapped.

Transient memory pools are kept in an RCU list. A memory barrier is
required after adding a new entry, because any address within a transient
buffer must be immediately recognized as belonging to the SWIOTLB, even if
it is passed to another CPU.
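
A publication sketch for the writer side (the per-device list and
lock names here are assumptions about the data structures):

	/* Publish the new transient pool; the barrier makes the list
	 * update visible before the bounce buffer address can travel
	 * to another CPU. */
	spin_lock_irqsave(&amp;dev-&gt;dma_io_tlb_lock, flags);
	list_add_rcu(&amp;pool-&gt;node, &amp;dev-&gt;dma_io_tlb_pools);
	spin_unlock_irqrestore(&amp;dev-&gt;dma_io_tlb_lock, flags);
	smp_wmb();	/* pairs with smp_rmb() on the lookup side */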

Deletion does not require any synchronization beyond RCU ordering
guarantees. After a buffer is unmapped, its physical addresses may no
longer be passed to the DMA API, so the memory range of the corresponding
stale entry in the RCU list never matches. If the memory range gets
allocated again, then it happens only after an RCU quiescent state.

Since bounce buffers can now be allocated from different pools, add a
parameter to swiotlb_alloc_pool() to let the caller know which memory pool
is used. Add swiotlb_find_pool() to find the memory pool corresponding to
an address. This function is now also used by is_swiotlb_buffer(), because
a simple boundary check is no longer sufficient.
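
And the matching lookup side, as used from is_swiotlb_buffer() (again
a sketch; the pool type and field names are assumptions):

static bool paddr_in_transient_pools(struct device *dev,
				     phys_addr_t paddr)
{
	struct io_tlb_pool *pool;
	bool found = false;

	smp_rmb();	/* pairs with the writer's smp_wmb() */
	rcu_read_lock();
	list_for_each_entry_rcu(pool, &amp;dev-&gt;dma_io_tlb_pools, node) {
		if (paddr &gt;= pool-&gt;start &amp;&amp; paddr &lt; pool-&gt;end) {
			found = true;
			break;
		}
	}
	rcu_read_unlock();
	return found;
}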

The logic in swiotlb_alloc_tlb() is taken from __dma_direct_alloc_pages(),
simplified and enhanced to use coherent memory pools if needed.

Note that this is not the most efficient way to provide a bounce buffer,
but when a DMA buffer can't be mapped, something may (and will) actually
break. At that point it is better to make an allocation, even if it may be
an expensive operation.

Signed-off-by: Petr Tesarik &lt;petr.tesarik.ext@huawei.com&gt;
Reviewed-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;
Signed-off-by: Christoph Hellwig &lt;hch@lst.de&gt;
</content>
</entry>
<entry>
<title>dma-mapping: name SG DMA flag helpers consistently</title>
<updated>2023-06-19T23:19:22Z</updated>
<author>
<name>Robin Murphy</name>
<email>robin.murphy@arm.com</email>
</author>
<published>2023-06-12T15:31:57Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=cb147bbe22d2be9b49021c2e5dacdf2935745d1c'/>
<id>urn:sha1:cb147bbe22d2be9b49021c2e5dacdf2935745d1c</id>
<content type='text'>
sg_is_dma_bus_address() is inconsistent with the naming pattern of its
corresponding setters and its own kerneldoc, so take the majority vote and
rename it sg_dma_is_bus_address() (and fix up the missing underscores in
the kerneldoc too).  This gives us a nice clear pattern where SG DMA flags
are SG_DMA_&lt;NAME&gt;, and the helpers for acting on them are
sg_dma_&lt;action&gt;_&lt;name&gt;().
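
Concretely, for the SG_DMA_BUS_ADDRESS flag the trio now reads (usage
sketch; the consumer function is hypothetical):

	sg_dma_mark_bus_address(sg);		/* set */
	if (sg_dma_is_bus_address(sg))		/* test, renamed helper */
		skip_bus_address_segment(sg);	/* hypothetical consumer */
	sg_dma_unmark_bus_address(sg);		/* clear */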

Link: https://lkml.kernel.org/r/20230612153201.554742-14-catalin.marinas@arm.com
Signed-off-by: Robin Murphy &lt;robin.murphy@arm.com&gt;
Signed-off-by: Catalin Marinas &lt;catalin.marinas@arm.com&gt;
Reviewed-by: Christoph Hellwig &lt;hch@lst.de&gt;
Reviewed-by: Jerry Snitselaar &lt;jsnitsel@redhat.com&gt;
Reviewed-by: Logan Gunthorpe &lt;logang@deltatee.com&gt;
Link: https://lore.kernel.org/r/fa2eca2862c7ffc41b50337abffb2dfd2864d3ea.1685036694.git.robin.murphy@arm.com
Tested-by: Isaac J. Manjarres &lt;isaacmanjarres@google.com&gt;
Cc: Alasdair Kergon &lt;agk@redhat.com&gt;
Cc: Ard Biesheuvel &lt;ardb@kernel.org&gt;
Cc: Arnd Bergmann &lt;arnd@arndb.de&gt;
Cc: Daniel Vetter &lt;daniel@ffwll.ch&gt;
Cc: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;
Cc: Herbert Xu &lt;herbert@gondor.apana.org.au&gt;
Cc: Joerg Roedel &lt;joro@8bytes.org&gt;
Cc: Jonathan Cameron &lt;jic23@kernel.org&gt;
Cc: Jonathan Cameron &lt;Jonathan.Cameron@huawei.com&gt;
Cc: Lars-Peter Clausen &lt;lars@metafoo.de&gt;
Cc: Marc Zyngier &lt;maz@kernel.org&gt;
Cc: Mark Brown &lt;broonie@kernel.org&gt;
Cc: Mike Snitzer &lt;snitzer@kernel.org&gt;
Cc: "Rafael J. Wysocki" &lt;rafael@kernel.org&gt;
Cc: Saravana Kannan &lt;saravanak@google.com&gt;
Cc: Vlastimil Babka &lt;vbabka@suse.cz&gt;
Cc: Will Deacon &lt;will@kernel.org&gt;
Signed-off-by: Andrew Morton &lt;akpm@linux-foundation.org&gt;
</content>
</entry>
<entry>
<title>dma-direct: cleanup parameters to dma_direct_optimal_gfp_mask</title>
<updated>2023-03-28T01:34:05Z</updated>
<author>
<name>Petr Tesarik</name>
<email>petrtesarik@huaweicloud.com</email>
</author>
<published>2023-02-20T15:06:22Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=25a4ce564921db0973b74c04e0ea23bd2ee12a3a'/>
<id>urn:sha1:25a4ce564921db0973b74c04e0ea23bd2ee12a3a</id>
<content type='text'>
Since both callers of dma_direct_optimal_gfp_mask() pass
dev-&gt;coherent_dma_mask as the second argument, it is better to
remove that parameter altogether.

Not only is reducing the number of parameters good for readability, but
the new function signature is also more logical: The optimal flags
depend only on data contained in struct device.

While touching this code, let's also rename phys_mask to phys_limit
in dma_direct_alloc_from_pool(), because it is indeed a limit.
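
In terms of signatures, the change amounts to roughly this sketch:

	/* Before: the mask was passed in although it comes from dev. */
	static gfp_t dma_direct_optimal_gfp_mask(struct device *dev,
						 u64 dma_mask, u64 *phys_limit);

	/* After: dev-&gt;coherent_dma_mask is read inside the function. */
	static gfp_t dma_direct_optimal_gfp_mask(struct device *dev,
						 u64 *phys_limit);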

Signed-off-by: Petr Tesarik &lt;petrtesarik@huaweicloud.com&gt;
Signed-off-by: Christoph Hellwig &lt;hch@lst.de&gt;
</content>
</entry>
<entry>
<title>Merge tag 'dma-mapping-5.20-2022-08-06' of git://git.infradead.org/users/hch/dma-mapping</title>
<updated>2022-08-06T17:56:45Z</updated>
<author>
<name>Linus Torvalds</name>
<email>torvalds@linux-foundation.org</email>
</author>
<published>2022-08-06T17:56:45Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=c993e07be023acdeec8e84e2e0743c52adb5fc94'/>
<id>urn:sha1:c993e07be023acdeec8e84e2e0743c52adb5fc94</id>
<content type='text'>
Pull dma-mapping updates from Christoph Hellwig:

 - convert arm32 to the common dma-direct code (Arnd Bergmann, Robin
   Murphy, Christoph Hellwig)

 - restructure the PCIe peer to peer mapping support (Logan Gunthorpe)

 - allow the IOMMU code to communicate an optional DMA mapping length
   and use that in scsi and libata (John Garry)

 - split the global swiotlb lock (Tianyu Lan)

 - various fixes and cleanup (Chao Gao, Dan Carpenter, Dongli Zhang,
   Lukas Bulwahn, Robin Murphy)

* tag 'dma-mapping-5.20-2022-08-06' of git://git.infradead.org/users/hch/dma-mapping: (45 commits)
  swiotlb: fix passing local variable to debugfs_create_ulong()
  dma-mapping: reformat comment to suppress htmldoc warning
  PCI/P2PDMA: Remove pci_p2pdma_[un]map_sg()
  RDMA/rw: drop pci_p2pdma_[un]map_sg()
  RDMA/core: introduce ib_dma_pci_p2p_dma_supported()
  nvme-pci: convert to using dma_map_sgtable()
  nvme-pci: check DMA ops when indicating support for PCI P2PDMA
  iommu/dma: support PCI P2PDMA pages in dma-iommu map_sg
  iommu: Explicitly skip bus address marked segments in __iommu_map_sg()
  dma-mapping: add flags to dma_map_ops to indicate PCI P2PDMA support
  dma-direct: support PCI P2PDMA pages in dma-direct map_sg
  dma-mapping: allow EREMOTEIO return code for P2PDMA transfers
  PCI/P2PDMA: Introduce helpers for dma_map_sg implementations
  PCI/P2PDMA: Attempt to set map_type if it has not been set
  lib/scatterlist: add flag for indicating P2PDMA segments in an SGL
  swiotlb: clean up some coding style and minor issues
  dma-mapping: update comment after dmabounce removal
  scsi: sd: Add a comment about limiting max_sectors to shost optimal limit
  ata: libata-scsi: cap ata_device-&gt;max_sectors according to shost-&gt;max_sectors
  scsi: scsi_transport_sas: cap shost opt_sectors according to DMA optimal limit
  ...
</content>
</entry>
<entry>
<title>dma-direct: support PCI P2PDMA pages in dma-direct map_sg</title>
<updated>2022-07-26T11:27:47Z</updated>
<author>
<name>Logan Gunthorpe</name>
<email>logang@deltatee.com</email>
</author>
<published>2022-07-08T16:50:56Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=f02ad36d4f76645e7e1c21f572260e9a2e61c26b'/>
<id>urn:sha1:f02ad36d4f76645e7e1c21f572260e9a2e61c26b</id>
<content type='text'>
Add PCI P2PDMA support for dma_direct_map_sg() so that it can map
PCI P2PDMA pages directly without a hack in the callers. This allows
for heterogeneous SGLs that contain both P2PDMA and regular pages.

A P2PDMA page may have three possible outcomes when being mapped:
  1) If the data path between the two devices doesn't go through the
     root port, then it should be mapped with a PCI bus address
  2) If the data path goes through the host bridge, it should be mapped
     normally, as though it were a CPU physical address
  3) It is not possible for the two devices to communicate and thus
     the mapping operation should fail (and it will return -EREMOTEIO).

SGL segments that contain PCI bus addresses are marked with
sg_dma_mark_pci_p2pdma() and are ignored when unmapped.
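
A sketch of the per-segment dispatch this implies inside
dma_direct_map_sg() (hedged; the state variable and error path are
assumptions, and the marker follows the helper named above):

	switch (pci_p2pdma_map_segment(&amp;p2pdma_state, dev, sg)) {
	case PCI_P2PDMA_MAP_BUS_ADDR:
		/* Case 1: traffic stays below the root port. */
		sg_dma_mark_pci_p2pdma(sg);
		continue;
	case PCI_P2PDMA_MAP_THRU_HOST_BRIDGE:
		/* Case 2: map like a plain CPU physical address. */
		break;
	default:
		/* Case 3: the peers cannot communicate. */
		ret = -EREMOTEIO;
		goto out_unmap;
	}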

P2PDMA mappings are also failed if swiotlb needs to be used on the
mapping.

Signed-off-by: Logan Gunthorpe &lt;logang@deltatee.com&gt;
Reviewed-by: Christoph Hellwig &lt;hch@lst.de&gt;
Signed-off-by: Christoph Hellwig &lt;hch@lst.de&gt;
</content>
</entry>
<entry>
<title>dma-direct: use the correct size for dma_set_encrypted()</title>
<updated>2022-06-23T13:26:59Z</updated>
<author>
<name>Dexuan Cui</name>
<email>decui@microsoft.com</email>
</author>
<published>2022-06-22T19:14:24Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=3be4562584bba603f33863a00c1c32eecf772ee6'/>
<id>urn:sha1:3be4562584bba603f33863a00c1c32eecf772ee6</id>
<content type='text'>
The third parameter of dma_set_encrypted() is a size in bytes rather than
the number of pages.
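
The shape of the fix, as a sketch (variable names illustrative):

	/* Buggy: passed a page count where bytes are expected. */
	ret = dma_set_encrypted(dev, vaddr, 1 &lt;&lt; page_order);

	/* Fixed: pass the allocation size in bytes. */
	ret = dma_set_encrypted(dev, vaddr, size);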

Fixes: 4d0564785bb0 ("dma-direct: factor out dma_set_{de,en}crypted helpers")
Signed-off-by: Dexuan Cui &lt;decui@microsoft.com&gt;
Reviewed-by: Robin Murphy &lt;robin.murphy@arm.com&gt;
Signed-off-by: Christoph Hellwig &lt;hch@lst.de&gt;
</content>
</entry>
</feed>
