<feed xmlns='http://www.w3.org/2005/Atom'>
<title>linux/kernel/dma, branch v5.3</title>
<subtitle>Mirror of https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/</subtitle>
<id>https://git.shady.money/linux/atom?h=v5.3</id>
<link rel='self' href='https://git.shady.money/linux/atom?h=v5.3'/>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/'/>
<updated>2019-08-20T22:14:10Z</updated>
<entry>
<title>dma-direct: fix zone selection after an unaddressable CMA allocation</title>
<updated>2019-08-20T22:14:10Z</updated>
<author>
<name>Christoph Hellwig</name>
<email>hch@lst.de</email>
</author>
<published>2019-08-20T02:45:49Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=90ae409f9eb3bcaf38688f9ec22375816053a08e'/>
<id>urn:sha1:90ae409f9eb3bcaf38688f9ec22375816053a08e</id>
<content type='text'>
The new dma_alloc_contiguous hides whether we allocate CMA or regular
pages, and thus fails to retry a ZONE_NORMAL allocation if the CMA
allocation succeeds but isn't addressable.  That means we either fail
outright or dip into a small zone that might not succeed either.
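
For illustration, the fixed allocation path roughly looks like this
(a simplified sketch of the retry logic, not the verbatim kernel code):

	struct page *page = dma_alloc_contiguous(dev, alloc_size, gfp);

	/* a CMA allocation can succeed and still not be addressable */
	if (page &amp;&amp; !dma_coherent_ok(dev, page_to_phys(page), size)) {
		dma_free_contiguous(dev, page, alloc_size);
		page = NULL;
	}

	/* retry from the page allocator, falling back through the zones */
	if (!page)
		page = alloc_pages_node(node, gfp, get_order(alloc_size));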

Thanks to Hillf Danton for debugging this issue.

Fixes: b1d2dc009dec ("dma-contiguous: add dma_{alloc,free}_contiguous() helpers")
Reported-by: Tobias Klausmann &lt;tobias.johannes.klausmann@mni.thm.de&gt;
Signed-off-by: Christoph Hellwig &lt;hch@lst.de&gt;
Tested-by: Tobias Klausmann &lt;tobias.johannes.klausmann@mni.thm.de&gt;
</content>
</entry>
<entry>
<title>dma-mapping: fix page attributes for dma_mmap_*</title>
<updated>2019-08-10T17:52:45Z</updated>
<author>
<name>Christoph Hellwig</name>
<email>hch@lst.de</email>
</author>
<published>2019-07-26T07:26:40Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=33dcb37cef741294b481f4d889a465b8091f11bf'/>
<id>urn:sha1:33dcb37cef741294b481f4d889a465b8091f11bf</id>
<content type='text'>
All the way back to the introduction of dma_common_mmap we've
defaulted to marking the pages as uncached.  But this is wrong for DMA
coherent devices.  Later on, DMA_ATTR_WRITE_COMBINE also got incorrect
treatment, as that flag is only treated specially on the alloc side
for non-coherent devices.

Introduce a new dma_pgprot helper that handles the check for coherent
devices, so that only the remapping cases ever reach
arch_dma_mmap_pgprot and no aliasing of page attributes can happen.
This makes the powerpc version of arch_dma_mmap_pgprot obsolete and
simplifies the remaining ones.
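
As a rough sketch against the v5.3 tree (simplified, with some config
checks elided), the helper looks like:

	pgprot_t dma_pgprot(struct device *dev, pgprot_t prot, unsigned long attrs)
	{
		/* coherent devices can use cacheable userspace mappings */
		if (dev_is_dma_coherent(dev))
			return prot;
		/* only the remapping cases reach the arch hook */
		if (IS_ENABLED(CONFIG_ARCH_HAS_DMA_MMAP_PGPROT))
			return arch_dma_mmap_pgprot(dev, prot, attrs);
		return pgprot_noncached(prot);
	}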

Note that this means arch_dma_mmap_pgprot is a bit misnamed now, but
we'll phase it out soon.

Fixes: 64ccc9c033c6 ("common: dma-mapping: add support for generic dma_mmap_* calls")
Reported-by: Shawn Anastasio &lt;shawn@anastas.io&gt;
Reported-by: Gavin Li &lt;git@thegavinli.com&gt;
Signed-off-by: Christoph Hellwig &lt;hch@lst.de&gt;
Acked-by: Catalin Marinas &lt;catalin.marinas@arm.com&gt; # arm64
</content>
</entry>
<entry>
<title>dma-direct: don't truncate dma_required_mask to bus addressing capabilities</title>
<updated>2019-08-10T17:52:45Z</updated>
<author>
<name>Lucas Stach</name>
<email>l.stach@pengutronix.de</email>
</author>
<published>2019-08-05T15:51:53Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=d8ad55538abe443919e20e0bb996561bca9cad84'/>
<id>urn:sha1:d8ad55538abe443919e20e0bb996561bca9cad84</id>
<content type='text'>
The DMA required_mask needs to reflect the actual addressing
capabilities needed to handle the whole system RAM.  When it is
truncated down to the bus addressing capabilities,
dma_addressing_limited() incorrectly signals no limitations for
devices which are in fact restricted by the bus_dma_mask.
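
For reference, the helper boils down to comparing the usable mask
against the required mask (a sketch along the lines of the v5.3 code):

	static inline bool dma_addressing_limited(struct device *dev)
	{
		/* limited if the usable mask cannot cover all of system RAM */
		return min_not_zero(dma_get_mask(dev), dev-&gt;bus_dma_mask) &lt;
			dma_get_required_mask(dev);
	}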

Fixes: b4ebe6063204 ("dma-direct: implement complete bus_dma_mask handling")
Signed-off-by: Lucas Stach &lt;l.stach@pengutronix.de&gt;
Tested-by: Atish Patra &lt;atish.patra@wdc.com&gt;
Signed-off-by: Christoph Hellwig &lt;hch@lst.de&gt;
</content>
</entry>
<entry>
<title>dma-direct: fix DMA_ATTR_NO_KERNEL_MAPPING</title>
<updated>2019-08-10T17:52:45Z</updated>
<author>
<name>Christoph Hellwig</name>
<email>hch@lst.de</email>
</author>
<published>2019-08-06T11:33:23Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=cf14be0b41c659ede89abef3f7ec0e98e6cfea5b'/>
<id>urn:sha1:cf14be0b41c659ede89abef3f7ec0e98e6cfea5b</id>
<content type='text'>
The new DMA_ATTR_NO_KERNEL_MAPPING support needs to actually assign
a dma_addr to work.  Also skip it if the architecture needs forced
decryption handling, as that requires a kernel virtual address.
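
The fixed branch in dma_direct_alloc_pages() then roughly reads
(simplified sketch):

	if (attrs &amp; DMA_ATTR_NO_KERNEL_MAPPING &amp;&amp;
	    !force_dma_unencrypted(dev)) {
		/* remove any dirty cache lines on the kernel alias */
		if (!PageHighMem(page))
			arch_dma_prep_coherent(page, size);
		/* return the page pointer as the opaque cookie */
		*dma_handle = phys_to_dma(dev, page_to_phys(page));
		return page;
	}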

Fixes: d98849aff879 ("dma-direct: handle DMA_ATTR_NO_KERNEL_MAPPING in common code")
Signed-off-by: Christoph Hellwig &lt;hch@lst.de&gt;
Reviewed-by: Lucas Stach &lt;l.stach@pengutronix.de&gt;
</content>
</entry>
<entry>
<title>Merge tag 'arm-swiotlb-5.3' of git://git.infradead.org/users/hch/dma-mapping</title>
<updated>2019-08-02T15:44:33Z</updated>
<author>
<name>Linus Torvalds</name>
<email>torvalds@linux-foundation.org</email>
</author>
<published>2019-08-02T15:44:33Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=234172f6bbf8e26fa8407c4bbbf2a36da30d7913'/>
<id>urn:sha1:234172f6bbf8e26fa8407c4bbbf2a36da30d7913</id>
<content type='text'>
Pull arm swiotlb support from Christoph Hellwig:
 "This fixes a cascade of regressions that originally started with the
  addition of the ia64 port, but only got fatal once we removed most
  uses of block layer bounce buffering in Linux 4.18.

  The reason is that the original i386/PAE code, the first architecture
  that supported &gt; 4GB of memory without an iommu, decided to leave
  bounce buffering to the subsystems, which in those days just meant
  block and networking, as no one else consumed arbitrary userspace
  memory.

  Later with ia64, x86_64 and other ports we assumed that either an
  iommu or something that fakes one up ("software IOTLB" in beautiful
  Intel speak) is present and that subsystems can rely on that for
  dealing with addressing limitations in devices.  Except that the ARM
  LPAE scheme that added larger physical addresses to 32-bit ARM did not
  follow that scheme, and thus only worked by chance and only for block
  and networking I/O directly to highmem.

  Long story, short fix - add swiotlb support to arm when built for LPAE
  platforms, which actually turns out to be pretty trivial with the
  modern dma-direct / swiotlb code, to fix the Linux 4.18-ish regression"

* tag 'arm-swiotlb-5.3' of git://git.infradead.org/users/hch/dma-mapping:
  arm: use swiotlb for bounce buffering on LPAE configs
  dma-mapping: check pfn validity in dma_common_{mmap,get_sgtable}
</content>
</entry>
<entry>
<title>dma-contiguous: page-align the size in dma_free_contiguous()</title>
<updated>2019-07-29T06:50:04Z</updated>
<author>
<name>Nicolin Chen</name>
<email>nicoleotsuka@gmail.com</email>
</author>
<published>2019-07-26T19:34:33Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=f46cc0152501e46d1b3aa5e7eade61145070eab0'/>
<id>urn:sha1:f46cc0152501e46d1b3aa5e7eade61145070eab0</id>
<content type='text'>
According to the original dma_direct_alloc_pages() code:
{
	unsigned int count = PAGE_ALIGN(size) &gt;&gt; PAGE_SHIFT;

	if (!dma_release_from_contiguous(dev, page, count))
		__free_pages(page, get_order(size));
}

The count parameter for dma_release_from_contiguous() was derived
from a page-aligned size before the right-shift operation, while the
new API dma_free_contiguous() forgets to apply PAGE_ALIGN() to the
size.

So this patch simply adds the PAGE_ALIGN() to prevent any corner case.
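
With the fix applied, the free path mirrors the allocation side
(a sketch of the resulting function, names per v5.3):

	void dma_free_contiguous(struct device *dev, struct page *page, size_t size)
	{
		/* page-align the size before converting it to a count */
		if (!cma_release(dev_get_cma_area(dev), page,
				 PAGE_ALIGN(size) &gt;&gt; PAGE_SHIFT))
			__free_pages(page, get_order(size));
	}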

Fixes: fdaeec198ada ("dma-contiguous: add dma_{alloc,free}_contiguous() helpers")
Signed-off-by: Nicolin Chen &lt;nicoleotsuka@gmail.com&gt;
Reviewed-by: Christoph Hellwig &lt;hch@lst.de&gt;
Signed-off-by: Christoph Hellwig &lt;hch@lst.de&gt;
</content>
</entry>
<entry>
<title>dma-contiguous: do not overwrite align in dma_alloc_contiguous()</title>
<updated>2019-07-29T06:50:04Z</updated>
<author>
<name>Nicolin Chen</name>
<email>nicoleotsuka@gmail.com</email>
</author>
<published>2019-07-26T19:34:32Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=c6622a425acd1d2f3a443cd39b490a8777b622d7'/>
<id>urn:sha1:c6622a425acd1d2f3a443cd39b490a8777b622d7</id>
<content type='text'>
dma_alloc_contiguous() limits align to CONFIG_CMA_ALIGNMENT for
cma_alloc(), but does not restore the original value for the fallback
routine.  This results in a size mismatch between the allocation and
the free when the fallback routine runs after cma_alloc() fails and
the align is larger than CONFIG_CMA_ALIGNMENT.

This patch adds a separate cma_align for the cma_alloc() call, so the
original align is no longer overwritten.
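
In other words, the clamp moves into a CMA-local variable (simplified
sketch of the fixed hunk):

	size_t align = get_order(PAGE_ALIGN(size));

	if (cma &amp;&amp; gfpflags_allow_blocking(gfp)) {
		/* clamp only the cma_alloc() attempt, not the fallback */
		size_t cma_align = min_t(size_t, align, CONFIG_CMA_ALIGNMENT);

		page = cma_alloc(cma, count, cma_align, gfp &amp; __GFP_NOWARN);
	}

	/* the fallback allocation still sees the original align */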

Fixes: fdaeec198ada ("dma-contiguous: add dma_{alloc,free}_contiguous() helpers")
Reported-by: Dafna Hirschfeld &lt;dafna.hirschfeld@collabora.com&gt;
Signed-off-by: Nicolin Chen &lt;nicoleotsuka@gmail.com&gt;
Signed-off-by: Christoph Hellwig &lt;hch@lst.de&gt;
</content>
</entry>
<entry>
<title>dma-mapping: check pfn validity in dma_common_{mmap,get_sgtable}</title>
<updated>2019-07-24T15:28:54Z</updated>
<author>
<name>Christoph Hellwig</name>
<email>hch@lst.de</email>
</author>
<published>2019-07-08T18:51:56Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=66d7780f18eae0232827fcffeaded39a6a168236'/>
<id>urn:sha1:66d7780f18eae0232827fcffeaded39a6a168236</id>
<content type='text'>
Check that the pfn returned from arch_dma_coherent_to_pfn refers to
a valid page and reject the mmap / get_sgtable requests otherwise.

Based on the arm implementation of the mmap and get_sgtable methods.
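
A minimal sketch of the added check (names per the v5.3 tree):

	unsigned long pfn = arch_dma_coherent_to_pfn(dev, cpu_addr, dma_addr);

	/* reject pfns that do not map to a valid struct page */
	if (!pfn_valid(pfn))
		return -ENXIO;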

Signed-off-by: Christoph Hellwig &lt;hch@lst.de&gt;
Tested-by: Vignesh Raghavendra &lt;vigneshr@ti.com&gt;
</content>
</entry>
<entry>
<title>Merge tag 'dma-mapping-5.3-1' of git://git.infradead.org/users/hch/dma-mapping</title>
<updated>2019-07-20T19:09:52Z</updated>
<author>
<name>Linus Torvalds</name>
<email>torvalds@linux-foundation.org</email>
</author>
<published>2019-07-20T19:09:52Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=ac60602a6d8f6830dee89f4b87ee005f62eb7171'/>
<id>urn:sha1:ac60602a6d8f6830dee89f4b87ee005f62eb7171</id>
<content type='text'>
Pull dma-mapping fixes from Christoph Hellwig:
 "Fix various regressions:

   - force unencrypted dma-coherent buffers if encryption bit can't fit
     into the dma coherent mask (Tom Lendacky)

   - avoid limiting request size if swiotlb is not used (me)

   - fix swiotlb handling in dma_direct_sync_sg_for_cpu/device (Fugang
     Duan)"

* tag 'dma-mapping-5.3-1' of git://git.infradead.org/users/hch/dma-mapping:
  dma-direct: correct the physical addr in dma_direct_sync_sg_for_cpu/device
  dma-direct: only limit the mapping size if swiotlb could be used
  dma-mapping: add a dma_addressing_limited helper
  dma-direct: Force unencrypted DMA under SME for certain DMA masks
</content>
</entry>
<entry>
<title>dma-direct: correct the physical addr in dma_direct_sync_sg_for_cpu/device</title>
<updated>2019-07-19T12:09:40Z</updated>
<author>
<name>Fugang Duan</name>
<email>fugang.duan@nxp.com</email>
</author>
<published>2019-07-19T09:26:48Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=449fa54d6815be8c2c1f68fa9dbbae9384a7c03e'/>
<id>urn:sha1:449fa54d6815be8c2c1f68fa9dbbae9384a7c03e</id>
<content type='text'>
dma_map_sg() may use a swiotlb bounce buffer when the kernel command
line includes "swiotlb=force" or when the DMA address is outside the
dev-&gt;dma_mask range.  After the device has finished moving the data
into memory, the user calls dma_sync_sg_for_cpu() to sync with the
DMA buffer and copy the data back to the original virtual buffer.

So dma_direct_sync_sg_for_cpu() should use the swiotlb physical
address, not the original physical address from sg_phys(sg).

dma_direct_sync_sg_for_device() has the same issue; correct it as
well.
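
The corrected loop derives the physical address back from the DMA
address, so a bounced buffer is synced at its swiotlb location
(simplified sketch):

	for_each_sg(sgl, sg, nents, i) {
		/* translate back from the (possibly bounced) DMA address */
		phys_addr_t paddr = dma_to_phys(dev, sg_dma_address(sg));

		if (unlikely(is_swiotlb_buffer(paddr)))
			swiotlb_tbl_sync_single(dev, paddr, sg-&gt;length,
						dir, SYNC_FOR_CPU);

		if (!dev_is_dma_coherent(dev))
			arch_sync_dma_for_cpu(dev, paddr, sg-&gt;length, dir);
	}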

Fixes: 55897af63091 ("dma-direct: merge swiotlb_dma_ops into the dma_direct code")
Signed-off-by: Fugang Duan &lt;fugang.duan@nxp.com&gt;
Reviewed-by: Robin Murphy &lt;robin.murphy@arm.com&gt;
Signed-off-by: Christoph Hellwig &lt;hch@lst.de&gt;
</content>
</entry>
</feed>
