<feed xmlns='http://www.w3.org/2005/Atom'>
<title>linux/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h, branch v4.20</title>
<subtitle>Mirror of https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/
</subtitle>
<id>https://git.shady.money/linux/atom?h=v4.20</id>
<link rel='self' href='https://git.shady.money/linux/atom?h=v4.20'/>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/'/>
<updated>2018-09-13T20:14:12Z</updated>
<entry>
<title>drm/amdgpu: use a single linked list for amdgpu_vm_bo_base</title>
<updated>2018-09-13T20:14:12Z</updated>
<author>
<name>Christian König</name>
<email>christian.koenig@amd.com</email>
</author>
<published>2018-09-10T18:02:46Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=646b90259842faa8341b076a3488a227927d84a2'/>
<id>urn:sha1:646b90259842faa8341b076a3488a227927d84a2</id>
<content type='text'>
Use a single linked list instead of a double linked list. This gets the
size of amdgpu_vm_pt down to 64 bytes again.

We could even reduce it down to 32 bytes, but that would require some
rather extreme hacks.
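
As a standalone illustration of where the savings come from (simplified
stand-ins below, not the actual amdgpu structs): an embedded double
linked list costs two pointers per element, a single linked list only
one, i.e. 8 bytes less per element on 64-bit.

#include &lt;stdio.h&gt;

/* Simplified stand-ins, not the real amdgpu structs. */
struct list_head { struct list_head *next, *prev; };

struct base_double {                /* before: double linked list */
    void *vm;
    void *bo;
    struct list_head vm_status;     /* two pointers of links, 16 bytes */
};

struct base_single {                /* after: single linked list */
    void *vm;
    void *bo;
    struct base_single *next;       /* one pointer of links, 8 bytes */
};

int main(void)
{
    printf("double: %zu bytes, single: %zu bytes\n",
           sizeof(struct base_double), sizeof(struct base_single));
    return 0;
}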

Signed-off-by: Christian König &lt;christian.koenig@amd.com&gt;
Reviewed-by: Chunming Zhou &lt;david1.zhou@amd.com&gt;
Acked-by: Felix Kuehling &lt;Felix.Kuehling@amd.com&gt;
Signed-off-by: Alex Deucher &lt;alexander.deucher@amd.com&gt;
</content>
</entry>
<entry>
<title>drm/amdgpu: Move fault hash table to amdgpu vm</title>
<updated>2018-09-12T21:28:53Z</updated>
<author>
<name>Oak Zeng</name>
<email>Oak.Zeng@amd.com</email>
</author>
<published>2018-09-06T03:51:23Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=240cd9a64226e013ac1a608ebf720a1813790196'/>
<id>urn:sha1:240cd9a64226e013ac1a608ebf720a1813790196</id>
<content type='text'>
Instead of sharing one fault hash table per device, make it per VM. This
avoids inter-process lock issues when the fault hash table is full.
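
A minimal sketch of the data-structure move (hypothetical, simplified
types for illustration, not the driver's):

#include &lt;pthread.h&gt;
#include &lt;stdint.h&gt;

/* Hypothetical, simplified fault filter; not the real amdgpu code. */
struct fault_hash {
    pthread_mutex_t lock;
    uint64_t        slots[256];   /* recently seen fault addresses */
};

/* Before: one table in the device, shared and contended by every
 * process; once it fills up, all processes are affected. */
struct device_shared {
    struct fault_hash faults;
};

/* After: one table per VM, i.e. per process, so a full table only
 * stalls the process that filled it; no cross-process lock is taken. */
struct vm_private {
    struct fault_hash faults;
};

int main(void)
{
    /* Two processes, two private tables: nothing shared to contend on. */
    struct vm_private vm_a, vm_b;

    pthread_mutex_init(&amp;vm_a.faults.lock, NULL);
    pthread_mutex_init(&amp;vm_b.faults.lock, NULL);
    return 0;
}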

Change-Id: I5d1281b7c41eddc8e26113e010516557588d3708
Signed-off-by: Oak Zeng &lt;Oak.Zeng@amd.com&gt;
Suggested-by: Christian König &lt;Christian.Koenig@amd.com&gt;
Suggested-by: Felix Kuehling &lt;Felix.Kuehling@amd.com&gt;
Reviewed-by: Christian König &lt;christian.koenig@amd.com&gt;
Signed-off-by: Alex Deucher &lt;alexander.deucher@amd.com&gt;
</content>
</entry>
<entry>
<title>drm/amdgpu: correctly sign extend 48bit addresses v3</title>
<updated>2018-09-11T03:41:24Z</updated>
<author>
<name>Christian König</name>
<email>christian.koenig@amd.com</email>
</author>
<published>2018-08-27T16:22:31Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=ad9a5b78f585e9a9bd5ad06dfaf1269659a99f43'/>
<id>urn:sha1:ad9a5b78f585e9a9bd5ad06dfaf1269659a99f43</id>
<content type='text'>
Correctly sign extend the GMC addresses to 48 bits.

v2: sign extending turned out to be easier than expected.
v3: clean up the defines and move them into amdgpu_gmc.h as well
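
For illustration, a standalone sketch of the usual 48-bit
sign-extension trick in generic C (not the driver's actual helpers or
defines; assumes the common arithmetic right shift for signed values):

#include &lt;stdint.h&gt;
#include &lt;stdio.h&gt;

/* Sign extend a 48-bit GPU virtual address to a canonical 64-bit value:
 * shift the sign bit (bit 47) up to bit 63, then arithmetic-shift it
 * back down so bits 48..63 copy bit 47. */
static uint64_t sign_extend_48(uint64_t addr)
{
    return (uint64_t)(((int64_t)addr &lt;&lt; 16) &gt;&gt; 16);
}

int main(void)
{
    /* Upper-half address: bit 47 set, so bits 48..63 become all ones. */
    printf("0x%016llx\n",
           (unsigned long long)sign_extend_48(0x800000000000ULL));
    /* Lower-half address: bit 47 clear, the value is unchanged. */
    printf("0x%016llx\n",
           (unsigned long long)sign_extend_48(0x7fffffffffffULL));
    return 0;
}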

Signed-off-by: Christian König &lt;christian.koenig@amd.com&gt;
Reviewed-by: Junwei Zhang &lt;Jerry.Zhang@amd.com&gt;
Signed-off-by: Alex Deucher &lt;alexander.deucher@amd.com&gt;
</content>
</entry>
<entry>
<title>drm/amdgpu: separate per VM BOs from normal in the moved state</title>
<updated>2018-09-11T03:40:16Z</updated>
<author>
<name>Christian König</name>
<email>christian.koenig@amd.com</email>
</author>
<published>2018-09-01T11:25:31Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=c12a2ee5d002e39a387001cdb5065b560568b4f5'/>
<id>urn:sha1:c12a2ee5d002e39a387001cdb5065b560568b4f5</id>
<content type='text'>
This allows us to avoid taking the spinlock in more places.

Signed-off-by: Christian König &lt;christian.koenig@amd.com&gt;
Reviewed-by: Junwei Zhang &lt;Jerry.Zhang@amd.com&gt;
Signed-off-by: Alex Deucher &lt;alexander.deucher@amd.com&gt;
</content>
</entry>
<entry>
<title>drm/amdkfd: Release an acquired process vm</title>
<updated>2018-08-29T17:35:00Z</updated>
<author>
<name>Oak Zeng</name>
<email>Oak.Zeng@amd.com</email>
</author>
<published>2018-08-27T19:18:36Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=bf47afbabf1cf149f9ebc8e1f7dab6913e360dc4'/>
<id>urn:sha1:bf47afbabf1cf149f9ebc8e1f7dab6913e360dc4</id>
<content type='text'>
For a compute VM acquired from amdgpu, vm.pasid is managed by kfd.
Decouple the pasid from such a VM on process destroy to avoid a
duplicate pasid release.
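
A minimal sketch of the ownership rule (the stub names below are
illustrative, not the real driver functions): kfd releases the pasid it
manages and clears it from the VM, so the later VM teardown cannot
release the same pasid again.

#include &lt;stdio.h&gt;

/* Hypothetical, simplified types and helpers for illustration only. */
struct vm { unsigned int pasid; };

static void kfd_pasid_free_stub(unsigned int pasid)
{
    printf("kfd releases pasid %u\n", pasid);
}

static void amdgpu_vm_teardown_stub(struct vm *vm)
{
    if (vm-&gt;pasid)   /* already decoupled: nothing left to release */
        printf("amdgpu releases pasid %u\n", vm-&gt;pasid);
}

int main(void)
{
    struct vm vm = { 42 };

    /* On process destroy: release the kfd-managed pasid, then decouple
     * it from the VM so teardown can't release it a second time. */
    kfd_pasid_free_stub(vm.pasid);
    vm.pasid = 0;

    amdgpu_vm_teardown_stub(&amp;vm);   /* now a no-op for the pasid */
    return 0;
}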

Signed-off-by: Oak Zeng &lt;Oak.Zeng@amd.com&gt;
Reviewed-by: Felix Kuehling &lt;Felix.Kuehling@amd.com&gt;
Signed-off-by: Alex Deucher &lt;alexander.deucher@amd.com&gt;
</content>
</entry>
<entry>
<title>drm/amdgpu: Set pasid for compute vm (v2)</title>
<updated>2018-08-29T17:34:49Z</updated>
<author>
<name>Oak Zeng</name>
<email>Oak.Zeng@amd.com</email>
</author>
<published>2018-08-29T17:33:52Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=1685b01a858872075bc258a350153de0c7e95404'/>
<id>urn:sha1:1685b01a858872075bc258a350153de0c7e95404</id>
<content type='text'>
To turn an amdgpu VM into a compute VM, the old pasid is freed and
replaced with a pasid managed by kfd. kfd can't reuse the original pasid
allocated by amdgpu because kfd uses a different pasid policy than
amdgpu: for example, all graphics devices of a process share one and the
same pasid.

v2: rebase (Alex)
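
A hedged sketch of the swap described above (illustrative stub names,
not the real driver functions):

#include &lt;stdio.h&gt;

/* Illustrative stubs and names only, not the real driver code. */
struct vm { unsigned int pasid; };

static void amdgpu_pasid_free_stub(unsigned int pasid)
{
    printf("amdgpu frees graphics pasid %u\n", pasid);
}

/* Converting a graphics VM into a compute VM: drop the pasid allocated
 * under amdgpu's policy and adopt one allocated under kfd's policy,
 * under which all devices of a process share one and the same pasid. */
static void make_compute_vm(struct vm *vm, unsigned int kfd_pasid)
{
    if (vm-&gt;pasid)
        amdgpu_pasid_free_stub(vm-&gt;pasid);
    vm-&gt;pasid = kfd_pasid;
}

int main(void)
{
    struct vm vm = { 17 };

    make_compute_vm(&amp;vm, 42);
    printf("vm now uses kfd pasid %u\n", vm.pasid);
    return 0;
}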

Signed-off-by: Oak Zeng &lt;Oak.Zeng@amd.com&gt;
Reviewed-by: Felix Kuehling &lt;Felix.Kuehling@amd.com&gt;
Signed-off-by: Alex Deucher &lt;alexander.deucher@amd.com&gt;
</content>
</entry>
<entry>
<title>drm/amdgpu: remove extra root PD alignment</title>
<updated>2018-08-27T20:12:23Z</updated>
<author>
<name>Christian König</name>
<email>christian.koenig@amd.com</email>
</author>
<published>2018-08-22T13:47:37Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=248f2b8ef25c9505fc763d42bf5e2c9fcf94fd16'/>
<id>urn:sha1:248f2b8ef25c9505fc763d42bf5e2c9fcf94fd16</id>
<content type='text'>
Just another leftover from radeon.

Signed-off-by: Christian König &lt;christian.koenig@amd.com&gt;
Reviewed-by: Alex Deucher &lt;alexander.deucher@amd.com&gt;
Reviewed-by: Junwei Zhang &lt;Jerry.Zhang@amd.com&gt;
Acked-by: Huang Rui &lt;ray.huang@amd.com&gt;
Signed-off-by: Alex Deucher &lt;alexander.deucher@amd.com&gt;
</content>
</entry>
<entry>
<title>drm/amdgpu: Adjust the VM size based on system memory size v2</title>
<updated>2018-08-27T20:10:42Z</updated>
<author>
<name>Felix Kuehling</name>
<email>Felix.Kuehling@amd.com</email>
</author>
<published>2018-08-21T21:14:32Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=43370c4ce5c6a1fae84b58f67f7834902ee74b7c'/>
<id>urn:sha1:43370c4ce5c6a1fae84b58f67f7834902ee74b7c</id>
<content type='text'>
Set the VM size based on system memory size between the ASIC-specific
limits given by min_vm_size and max_bits. GFXv9 GPUs will keep their
default VM size of 256TB (48 bit). Only older GPUs will adjust VM size
depending on system memory size.

This makes more VM space available for ROCm applications on GFXv8 GPUs
that want to map all available VRAM and system memory in their SVM
address space.

v2:
* Clarify comment
* Round up memory size before &gt;&gt; 30
* Round up automatic vm_size to power of two
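
As a standalone sketch of the sizing rule above (the names and the
memory-to-VM-space factor of 3 are illustrative assumptions, not the
verbatim driver logic):

#include &lt;stdint.h&gt;
#include &lt;stdio.h&gt;

/* Round a 64-bit value up to the next power of two. */
static uint64_t roundup_pow_of_two_u64(uint64_t x)
{
    uint64_t p = 1;

    while (p &lt; x)
        p &lt;&lt;= 1;
    return p;
}

/* Illustrative sketch: the factor of 3 between system memory and VM
 * space is an assumption for this example, not taken from the patch. */
static uint64_t vm_size_gb(uint64_t sysmem_bytes, uint64_t min_vm_size_gb,
                           unsigned int max_bits)
{
    uint64_t max_gb = 1ULL &lt;&lt; (max_bits - 30);  /* ASIC limit in GB */
    /* Round the memory size up before shifting by 30, so that e.g.
     * 1.5 GiB of RAM counts as 2 GiB rather than 1 GiB. */
    uint64_t ram_gb = (sysmem_bytes + (1ULL &lt;&lt; 30) - 1) &gt;&gt; 30;
    uint64_t gb = ram_gb * 3;

    if (gb &lt; min_vm_size_gb)
        gb = min_vm_size_gb;
    if (gb &gt; max_gb)
        gb = max_gb;
    return roundup_pow_of_two_u64(gb);   /* power-of-two VM size */
}

int main(void)
{
    /* 6 GiB of RAM, 64 GiB minimum, 40-bit ASIC limit (1024 GiB). */
    printf("%llu GiB\n",
           (unsigned long long)vm_size_gb(6ULL &lt;&lt; 30, 64, 40));
    return 0;
}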

Signed-off-by: Felix Kuehling &lt;Felix.Kuehling@amd.com&gt;
Acked-by: Junwei Zhang &lt;Jerry.Zhang@amd.com&gt;
Reviewed-by: Huang Rui &lt;ray.huang@amd.com&gt;
Reviewed-by: Christian König &lt;christian.koenig@amd.com&gt;
Signed-off-by: Alex Deucher &lt;alexander.deucher@amd.com&gt;
</content>
</entry>
<entry>
<title>drm/amdgpu: use bulk moves for efficient VM LRU handling (v6)</title>
<updated>2018-08-27T16:11:22Z</updated>
<author>
<name>Huang Rui</name>
<email>ray.huang@amd.com</email>
</author>
<published>2018-08-06T02:57:08Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=f921661bd4a112f80d57bbfb3e792da63787f4b0'/>
<id>urn:sha1:f921661bd4a112f80d57bbfb3e792da63787f4b0</id>
<content type='text'>
This continues the work on bulk moving, based on the proposal by Christian.

Background:
The amdgpu driver moves all PD/PT and per-VM BOs onto the idle list and
then moves each of them to the end of the LRU list one by one. Moving
that many BOs to the end of the LRU one at a time seriously impacts
performance.

Christian then provided a workaround that avoids moving PD/PT BOs on the
LRU, in the patch below:
Commit 0bbf32026cf5ba41e9922b30e26e1bed1ecd38ae ("drm/amdgpu: band aid
validating VM PTs")

However, the final solution is to bulk move all PD/PT and per-VM BOs on
the LRU instead of moving them one by one.

Whenever amdgpu_vm_validate_pt_bos() is called and we have BOs which
need to be validated, we move all of them together to the end of the LRU
without dropping the LRU lock.

While doing so, we note the beginning and end of this block in the LRU list.

Now when amdgpu_vm_validate_pt_bos() is called and we don't have anything to do,
we don't move every BO one by one, but instead cut the LRU list into pieces so
that we bulk move everything to the end in just one operation.
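
The core trick can be shown with a standalone sketch (a simplified
list, not the actual TTM code): note the first and last entry of the
block, then splice the whole range to the tail of the LRU in a single
O(1) operation.

#include &lt;stddef.h&gt;
#include &lt;stdio.h&gt;

/* Minimal circular doubly linked list in the style of the kernel's
 * list.h; a simplified stand-in, not the TTM implementation. */
struct list_head { struct list_head *next, *prev; };

static void list_init(struct list_head *h)
{
    h-&gt;next = h;
    h-&gt;prev = h;
}

static void list_add_tail(struct list_head *n, struct list_head *h)
{
    n-&gt;prev = h-&gt;prev;
    n-&gt;next = h;
    h-&gt;prev-&gt;next = n;
    h-&gt;prev = n;
}

/* Bulk move: cut the noted [first, last] block out of the LRU and
 * splice it back in at the tail, one operation for the whole block. */
static void bulk_move_tail(struct list_head *lru, struct list_head *first,
                           struct list_head *last)
{
    first-&gt;prev-&gt;next = last-&gt;next;     /* cut the block out... */
    last-&gt;next-&gt;prev = first-&gt;prev;
    first-&gt;prev = lru-&gt;prev;            /* ...splice it at the tail */
    last-&gt;next = lru;
    lru-&gt;prev-&gt;next = first;
    lru-&gt;prev = last;
}

int main(void)
{
    struct list_head lru, n[4];
    struct list_head *p;
    int i;

    list_init(&amp;lru);
    for (i = 0; i &lt; 4; i++)
        list_add_tail(&amp;n[i], &amp;lru);

    /* Move the block n[0]..n[1] in one splice instead of two moves. */
    bulk_move_tail(&amp;lru, &amp;n[0], &amp;n[1]);

    for (p = lru.next; p != &amp;lru; p = p-&gt;next)
        printf("node %td\n", p - n);    /* prints 2 3 0 1 */
    return 0;
}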

Test data:
+--------------+-----------------+-----------+---------------------------------------+
|              |The Talos        |Clpeak(OCL)|BusSpeedReadback(OCL)                  |
|              |Principle(Vulkan)|           |                                       |
+--------------+-----------------+-----------+---------------------------------------+
| Original     |  147.7 FPS      |  76.86 us |0.319 ms(1K) 0.314 ms(2K) 0.308 ms(4K) |
|              |                 |           |0.307 ms(8K) 0.310 ms(16K)             |
+--------------+-----------------+-----------+---------------------------------------+
| Original + WA|  162.1 FPS      |  42.15 us |0.254 ms(1K) 0.241 ms(2K) 0.230 ms(4K) |
|(don't move   |                 |           |0.223 ms(8K) 0.204 ms(16K)             |
|PT BOs on LRU)|                 |           |                                       |
+--------------+-----------------+-----------+---------------------------------------+
| Bulk move    |  163.1 FPS      |  40.52 us |0.244 ms(1K) 0.252 ms(2K) 0.213 ms(4K) |
|              |                 |           |0.214 ms(8K) 0.225 ms(16K)             |
+--------------+-----------------+-----------+---------------------------------------+

Testing with the three benchmarks above, covering Vulkan and OpenCL,
shows a visible improvement over the original, and even better results
than the original with the workaround.

v2: move all BOs, including those on the idle, relocated, and moved
lists, to the end of the LRU together.
v3: remove an unused parameter and use list_for_each_entry instead of
the safe variant.
v4: move amdgpu_vm_move_to_lru_tail() to after command submission; at
that point all BOs will be back on the idle list.
v5: remove amdgpu_vm_move_to_lru_tail_by_list(), use bulk_moveable
instead of validated, and also move ttm_bo_bulk_move_lru_tail() into
amdgpu_vm_move_to_lru_tail().
v6: clean up and fix return value.

Signed-off-by: Christian König &lt;christian.koenig@amd.com&gt;
Signed-off-by: Huang Rui &lt;ray.huang@amd.com&gt;
Tested-by: Mike Lothian &lt;mike@fireburn.co.uk&gt;
Tested-by: Dieter Nützel &lt;Dieter@nuetzel-hh.de&gt;
Acked-by: Chunming Zhou &lt;david1.zhou@amd.com&gt;
Reviewed-by: Junwei Zhang &lt;Jerry.Zhang@amd.com&gt;
Signed-off-by: Alex Deucher &lt;alexander.deucher@amd.com&gt;
</content>
</entry>
<entry>
<title>drm/amdgpu: use new scheduler load balancing for VMs</title>
<updated>2018-08-27T16:10:45Z</updated>
<author>
<name>Christian König</name>
<email>christian.koenig@amd.com</email>
</author>
<published>2018-07-12T13:15:21Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=3798e9a6e6390b873a745d6240ac9646bd2bf514'/>
<id>urn:sha1:3798e9a6e6390b873a745d6240ac9646bd2bf514</id>
<content type='text'>
Instead of using a fixed round robin, let the scheduler balance the load
of page table updates.
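
A standalone sketch of the balancing idea (simplified stand-ins, not
the drm scheduler API): each submission picks the least-loaded run
queue instead of rotating through a fixed round robin.

#include &lt;stdint.h&gt;
#include &lt;stdio.h&gt;

/* Simplified stand-in for a scheduler run queue. */
struct run_queue {
    const char *ring;
    uint32_t    num_jobs;   /* current load on this queue */
};

/* Pick the least-loaded run queue instead of a fixed round robin. */
static struct run_queue *pick_rq(struct run_queue *rqs, int n)
{
    struct run_queue *best = rqs;
    int i;

    for (i = 1; i &lt; n; i++)
        if (rqs[i].num_jobs &lt; best-&gt;num_jobs)
            best = rqs + i;
    return best;
}

int main(void)
{
    struct run_queue rqs[2] = { { "sdma0", 3 }, { "sdma1", 1 } };

    /* The page table update lands on the less busy ring. */
    printf("update goes to %s\n", pick_rq(rqs, 2)-&gt;ring);
    return 0;
}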

Signed-off-by: Christian König &lt;christian.koenig@amd.com&gt;
Reviewed-by: Chunming Zhou &lt;david1.zhou@amd.com&gt;
Signed-off-by: Alex Deucher &lt;alexander.deucher@amd.com&gt;
</content>
</entry>
</feed>
