<feed xmlns='http://www.w3.org/2005/Atom'>
<title>linux/drivers/android/binder_alloc.c, branch v6.16</title>
<subtitle>Mirror of https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/
</subtitle>
<id>https://git.shady.money/linux/atom?h=v6.16</id>
<link rel='self' href='https://git.shady.money/linux/atom?h=v6.16'/>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/'/>
<updated>2024-12-24T08:35:23Z</updated>
<entry>
<title>binder: use per-vma lock in page reclaiming</title>
<updated>2024-12-24T08:35:23Z</updated>
<author>
<name>Carlos Llamas</name>
<email>cmllamas@google.com</email>
</author>
<published>2024-12-10T14:31:05Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=95bc2d4a9020efcd7858c91e68e9f4e842e3e8c8'/>
<id>urn:sha1:95bc2d4a9020efcd7858c91e68e9f4e842e3e8c8</id>
<content type='text'>
Use per-vma locking in the shrinker's callback when reclaiming pages,
similar to the page installation logic. This minimizes contention with
unrelated vmas, improving performance. The mmap_lock is still acquired
if the per-vma lock cannot be obtained.
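
A minimal sketch of the locking pattern, assuming the usual mainline
helpers (the reclaim body itself is illustrative):

  struct vm_area_struct *vma;
  bool per_vma_locked = true;

  /* Try the cheap per-vma lock first. */
  vma = lock_vma_under_rcu(mm, addr);
  if (!vma) {
          /* Contended or unavailable: fall back to the mmap_lock. */
          mmap_read_lock(mm);
          vma = vma_lookup(mm, addr);
          per_vma_locked = false;
  }

  if (vma)
          zap_page_range_single(vma, addr, PAGE_SIZE, NULL);

  if (per_vma_locked)
          vma_end_read(vma);
  else
          mmap_read_unlock(mm);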

Cc: Suren Baghdasaryan &lt;surenb@google.com&gt;
Suggested-by: Liam R. Howlett &lt;Liam.Howlett@oracle.com&gt;
Reviewed-by: Suren Baghdasaryan &lt;surenb@google.com&gt;
Signed-off-by: Carlos Llamas &lt;cmllamas@google.com&gt;
Link: https://lore.kernel.org/r/20241210143114.661252-10-cmllamas@google.com
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;
</content>
</entry>
<entry>
<title>binder: propagate vm_insert_page() errors</title>
<updated>2024-12-24T08:35:23Z</updated>
<author>
<name>Carlos Llamas</name>
<email>cmllamas@google.com</email>
</author>
<published>2024-12-10T14:31:04Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=978ce3ed703db86344e1df718ea0f56ec7d4dae1'/>
<id>urn:sha1:978ce3ed703db86344e1df718ea0f56ec7d4dae1</id>
<content type='text'>
Instead of always overriding errors with -ENOMEM, propagate the
specific error code returned by vm_insert_page(). This allows for more
accurate error logging and handling.
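
An illustrative before/after of the pattern:

  /* Before: any failure was reported as -ENOMEM. */
  if (vm_insert_page(vma, addr, page))
          return -ENOMEM;

  /* After: return what vm_insert_page() actually failed with. */
  ret = vm_insert_page(vma, addr, page);
  if (ret)
          return ret;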

Cc: Suren Baghdasaryan &lt;surenb@google.com&gt;
Reviewed-by: Suren Baghdasaryan &lt;surenb@google.com&gt;
Signed-off-by: Carlos Llamas &lt;cmllamas@google.com&gt;
Link: https://lore.kernel.org/r/20241210143114.661252-9-cmllamas@google.com
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;
</content>
</entry>
<entry>
<title>binder: use per-vma lock in page installation</title>
<updated>2024-12-24T08:35:23Z</updated>
<author>
<name>Carlos Llamas</name>
<email>cmllamas@google.com</email>
</author>
<published>2024-12-10T14:31:03Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=9e2aa76549b9fd2b8f7b81260417a4ec853910e6'/>
<id>urn:sha1:9e2aa76549b9fd2b8f7b81260417a4ec853910e6</id>
<content type='text'>
Use per-vma locking for concurrent page installations; this minimizes
contention with unrelated vmas, improving performance. The mmap_lock is
still acquired when needed, e.g. before get_user_pages_remote().
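
A minimal sketch of the fast/slow path split, assuming the mainline
per-vma lock helpers (the error value is illustrative):

  struct vm_area_struct *vma;
  int ret = -ESRCH;

  /* Fast path: install the page under the per-vma lock. */
  vma = lock_vma_under_rcu(mm, addr);
  if (vma) {
          ret = vm_insert_page(vma, addr, page);
          vma_end_read(vma);
          return ret;
  }

  /* Slow path: fall back to the mmap_lock. */
  mmap_read_lock(mm);
  vma = vma_lookup(mm, addr);
  if (vma)
          ret = vm_insert_page(vma, addr, page);
  mmap_read_unlock(mm);
  return ret;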

Many thanks to Barry Song who posted a similar approach [1].

Link: https://lore.kernel.org/all/20240902225009.34576-1-21cnbao@gmail.com/ [1]
Cc: Nhat Pham &lt;nphamcs@gmail.com&gt;
Cc: Johannes Weiner &lt;hannes@cmpxchg.org&gt;
Cc: Barry Song &lt;v-songbaohua@oppo.com&gt;
Cc: Suren Baghdasaryan &lt;surenb@google.com&gt;
Cc: Hillf Danton &lt;hdanton@sina.com&gt;
Cc: Lorenzo Stoakes &lt;lorenzo.stoakes@oracle.com&gt;
Reviewed-by: Suren Baghdasaryan &lt;surenb@google.com&gt;
Signed-off-by: Carlos Llamas &lt;cmllamas@google.com&gt;
Link: https://lore.kernel.org/r/20241210143114.661252-8-cmllamas@google.com
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;
</content>
</entry>
<entry>
<title>binder: rename alloc-&gt;buffer to vm_start</title>
<updated>2024-12-24T08:35:23Z</updated>
<author>
<name>Carlos Llamas</name>
<email>cmllamas@google.com</email>
</author>
<published>2024-12-10T14:31:02Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=0a7bf6866d416e4f8f452419410359b6a82639d1'/>
<id>urn:sha1:0a7bf6866d416e4f8f452419410359b6a82639d1</id>
<content type='text'>
The alloc-&gt;buffer field in struct binder_alloc stores the starting
address of the mapped vma; rename this field to alloc-&gt;vm_start to
better reflect its purpose. This also avoids confusion with the binder
buffer concept, e.g. transaction-&gt;buffer.

No functional changes in this patch.

Reviewed-by: Suren Baghdasaryan &lt;surenb@google.com&gt;
Signed-off-by: Carlos Llamas &lt;cmllamas@google.com&gt;
Link: https://lore.kernel.org/r/20241210143114.661252-7-cmllamas@google.com
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;
</content>
</entry>
<entry>
<title>binder: replace alloc-&gt;vma with alloc-&gt;mapped</title>
<updated>2024-12-24T08:35:23Z</updated>
<author>
<name>Carlos Llamas</name>
<email>cmllamas@google.com</email>
</author>
<published>2024-12-10T14:31:01Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=072010abc3ad98bc20198dbe60ef13233a0a357c'/>
<id>urn:sha1:072010abc3ad98bc20198dbe60ef13233a0a357c</id>
<content type='text'>
It is unsafe to use alloc-&gt;vma outside of the mmap_lock. Instead, add
a new boolean alloc-&gt;mapped to track the vma state (mapped or
unmapped) and use it in place of alloc-&gt;vma to validate several paths.

Using alloc-&gt;vma has caused several performance and security issues in
the past. Now that it has been replaced with either vma_lookup() or the
alloc-&gt;mapped state, we can finally remove it.
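
An illustrative sketch of a reader (the error value is assumed):

  bool mapped;

  /* alloc-&gt;mapped is updated under alloc-&gt;mutex at mmap/unmap
   * time, so readers take the same lock rather than dereferencing
   * a possibly stale vma pointer.
   */
  mutex_lock(&amp;alloc-&gt;mutex);
  mapped = alloc-&gt;mapped;
  mutex_unlock(&amp;alloc-&gt;mutex);

  if (!mapped)
          return -ESRCH;  /* vma has gone away, bail out */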

Cc: Minchan Kim &lt;minchan@kernel.org&gt;
Cc: Liam R. Howlett &lt;Liam.Howlett@oracle.com&gt;
Cc: Matthew Wilcox &lt;willy@infradead.org&gt;
Cc: Suren Baghdasaryan &lt;surenb@google.com&gt;
Reviewed-by: Suren Baghdasaryan &lt;surenb@google.com&gt;
Signed-off-by: Carlos Llamas &lt;cmllamas@google.com&gt;
Link: https://lore.kernel.org/r/20241210143114.661252-6-cmllamas@google.com
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;
</content>
</entry>
<entry>
<title>binder: store shrinker metadata under page-&gt;private</title>
<updated>2024-12-24T08:35:23Z</updated>
<author>
<name>Carlos Llamas</name>
<email>cmllamas@google.com</email>
</author>
<published>2024-12-10T14:31:00Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=f909f0308267dc49fbf122f60e1ec7ddcd1b92c7'/>
<id>urn:sha1:f909f0308267dc49fbf122f60e1ec7ddcd1b92c7</id>
<content type='text'>
Instead of pre-allocating an entire array of struct binder_lru_page in
alloc-&gt;pages, install the shrinker metadata under page-&gt;private. This
ensures the memory is allocated and released as needed alongside pages.

By converting alloc-&gt;pages[] into an array of struct page pointers,
we can access these pages directly and only reference the shrinker
metadata where it's being used (e.g. inside the shrinker's callback).

Rename struct binder_lru_page to struct binder_shrinker_mdata to better
reflect its purpose. Add convenience functions that wrap the allocation
and freeing of pages along with their shrinker metadata.
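
A sketch of the resulting shape (field names and GFP flags assumed):

  struct binder_shrinker_mdata {
          struct list_head lru;
          struct binder_alloc *alloc;
          unsigned long page_index;
  };

  static struct page *binder_page_alloc(struct binder_alloc *alloc,
                                        unsigned long index)
  {
          struct binder_shrinker_mdata *mdata;
          struct page *page;

          page = alloc_page(GFP_KERNEL | __GFP_ZERO);
          if (!page)
                  return NULL;

          /* Allocate the shrinker metadata alongside the page and
           * stash it under page-&gt;private; both are freed together.
           */
          mdata = kzalloc(sizeof(*mdata), GFP_KERNEL);
          if (!mdata) {
                  __free_page(page);
                  return NULL;
          }
          mdata-&gt;alloc = alloc;
          mdata-&gt;page_index = index;
          INIT_LIST_HEAD(&amp;mdata-&gt;lru);
          set_page_private(page, (unsigned long)mdata);

          return page;
  }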

Note I've reworked this patch to avoid using page-&gt;lru and page-&gt;index
directly, as Matthew pointed out that these are being removed [1].

Link: https://lore.kernel.org/all/ZzziucEm3np6e7a0@casper.infradead.org/ [1]
Cc: Matthew Wilcox &lt;willy@infradead.org&gt;
Cc: Liam R. Howlett &lt;Liam.Howlett@oracle.com&gt;
Reviewed-by: Suren Baghdasaryan &lt;surenb@google.com&gt;
Signed-off-by: Carlos Llamas &lt;cmllamas@google.com&gt;
Link: https://lore.kernel.org/r/20241210143114.661252-5-cmllamas@google.com
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;
</content>
</entry>
<entry>
<title>binder: select correct nid for pages in LRU</title>
<updated>2024-12-24T08:35:23Z</updated>
<author>
<name>Carlos Llamas</name>
<email>cmllamas@google.com</email>
</author>
<published>2024-12-10T14:30:59Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=49d2562c804fc4f43342b3254fe6fb87365c9046'/>
<id>urn:sha1:49d2562c804fc4f43342b3254fe6fb87365c9046</id>
<content type='text'>
The NUMA node id for binder pages is currently derived from the lru
entry embedded in struct binder_lru_page. However, that object lives in
a separately allocated array, so its node id does not necessarily match
that of the struct page it tracks.

Instead, select the correct node id from the page itself. This was made
possible since commit 0a97c01cd20b ("list_lru: allow explicit memcg and
NUMA node selection").

Cc: Nhat Pham &lt;nphamcs@gmail.com&gt;
Cc: Johannes Weiner &lt;hannes@cmpxchg.org&gt;
Cc: Suren Baghdasaryan &lt;surenb@google.com&gt;
Reviewed-by: Suren Baghdasaryan &lt;surenb@google.com&gt;
Signed-off-by: Carlos Llamas &lt;cmllamas@google.com&gt;
Link: https://lore.kernel.org/r/20241210143114.661252-4-cmllamas@google.com
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;
</content>
</entry>
<entry>
<title>binder: concurrent page installation</title>
<updated>2024-12-24T08:35:23Z</updated>
<author>
<name>Carlos Llamas</name>
<email>cmllamas@google.com</email>
</author>
<published>2024-12-10T14:30:58Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=d1716b4b78fb392a5514c8863e8ba287cc4580c2'/>
<id>urn:sha1:d1716b4b78fb392a5514c8863e8ba287cc4580c2</id>
<content type='text'>
Allow multiple callers to install pages simultaneously by switching the
mmap_lock from write-mode to read-mode. Races to the same PTE are
handled using get_user_pages_remote() to retrieve the already installed
page. This significantly reduces contention on the mmap_lock.

To ensure safety, vma_lookup() is used (instead of alloc-&gt;vma) to avoid
operating on an isolated VMA. In addition, zap_page_range_single() is
called under the alloc-&gt;mutex to avoid racing with the shrinker.
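
A sketch of the race handling, assuming the mainline
get_user_pages_remote() signature (called with the mmap_lock held in
read mode; details illustrative):

  /* Losers of a race to the same PTE see -EBUSY; drop the local
   * page and look up the one the winner installed.
   */
  ret = vm_insert_page(vma, addr, page);
  if (ret == -EBUSY) {
          __free_page(page);
          ret = get_user_pages_remote(mm, addr, 1, 0, &amp;page, NULL);
          if (ret &lt;= 0)
                  return ret ? ret : -EFAULT;
          ret = 0;
  }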

Many thanks to Barry Song who posted a similar approach [1].

Link: https://lore.kernel.org/all/20240902225009.34576-1-21cnbao@gmail.com/ [1]
Cc: David Hildenbrand &lt;david@redhat.com&gt;
Cc: Barry Song &lt;v-songbaohua@oppo.com&gt;
Cc: Suren Baghdasaryan &lt;surenb@google.com&gt;
Cc: Liam R. Howlett &lt;Liam.Howlett@oracle.com&gt;
Reviewed-by: Suren Baghdasaryan &lt;surenb@google.com&gt;
Signed-off-by: Carlos Llamas &lt;cmllamas@google.com&gt;
Link: https://lore.kernel.org/r/20241210143114.661252-3-cmllamas@google.com
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;
</content>
</entry>
<entry>
<title>Revert "binder: switch alloc-&gt;mutex to spinlock_t"</title>
<updated>2024-12-24T08:35:23Z</updated>
<author>
<name>Carlos Llamas</name>
<email>cmllamas@google.com</email>
</author>
<published>2024-12-10T14:30:57Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=8b52c7261e04f1325f47e2b173a9d4b980e8f858'/>
<id>urn:sha1:8b52c7261e04f1325f47e2b173a9d4b980e8f858</id>
<content type='text'>
This reverts commit 7710e2cca32e7f3958480e8bd44f50e29d0c2509.

In preparation for concurrent page installations, restore the original
alloc-&gt;mutex, which subsequent patches will use (instead of the
mmap_lock) to serialize zap_page_range_single() against page
installations.

Resolved trivial conflicts with commit 2c10a20f5e84a ("binder_alloc: Fix
sleeping function called from invalid context") and commit da0c02516c50
("mm/list_lru: simplify the list_lru walk callback function").

Cc: Mukesh Ojha &lt;quic_mojha@quicinc.com&gt;
Reviewed-by: Suren Baghdasaryan &lt;surenb@google.com&gt;
Signed-off-by: Carlos Llamas &lt;cmllamas@google.com&gt;
Link: https://lore.kernel.org/r/20241210143114.661252-2-cmllamas@google.com
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;
</content>
</entry>
<entry>
<title>mm/list_lru: simplify the list_lru walk callback function</title>
<updated>2024-11-12T01:22:26Z</updated>
<author>
<name>Kairui Song</name>
<email>kasong@tencent.com</email>
</author>
<published>2024-11-04T17:52:57Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=da0c02516c501b43bd39ad4aca5779c86153d143'/>
<id>urn:sha1:da0c02516c501b43bd39ad4aca5779c86153d143</id>
<content type='text'>
Isolation no longer takes the global list_lru node lock; only the
per-cgroup lock is used instead. Since this lock lives inside the
list_lru_one being walked, it no longer needs to be passed to the walk
callback explicitly.
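
The resulting callback shape (walker and callback names illustrative):

  static enum lru_status demo_isolate(struct list_head *item,
                                      struct list_lru_one *list,
                                      void *cb_arg)
  {
          /* The list's own lock (now inside list_lru_one) is held
           * by the walker; it is no longer passed as an argument.
           */
          list_lru_isolate(list, item);
          return LRU_REMOVED;
  }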

Link: https://lkml.kernel.org/r/20241104175257.60853-7-ryncsn@gmail.com
Signed-off-by: Kairui Song &lt;kasong@tencent.com&gt;
Cc: Chengming Zhou &lt;zhouchengming@bytedance.com&gt;
Cc: Johannes Weiner &lt;hannes@cmpxchg.org&gt;
Cc: Matthew Wilcox (Oracle) &lt;willy@infradead.org&gt;
Cc: Michal Hocko &lt;mhocko@suse.com&gt;
Cc: Muchun Song &lt;muchun.song@linux.dev&gt;
Cc: Qi Zheng &lt;zhengqi.arch@bytedance.com&gt;
Cc: Roman Gushchin &lt;roman.gushchin@linux.dev&gt;
Cc: Shakeel Butt &lt;shakeel.butt@linux.dev&gt;
Cc: Waiman Long &lt;longman@redhat.com&gt;
Signed-off-by: Andrew Morton &lt;akpm@linux-foundation.org&gt;
</content>
</entry>
</feed>
