<feed xmlns='http://www.w3.org/2005/Atom'>
<title>linux/include/asm-generic/pgtable.h, branch v5.5</title>
<subtitle>Mirror of https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/
</subtitle>
<id>https://git.shady.money/linux/atom?h=v5.5</id>
<link rel='self' href='https://git.shady.money/linux/atom?h=v5.5'/>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/'/>
<updated>2019-12-01T14:29:19Z</updated>
<entry>
<title>mm/memory.c: fix a huge pud insertion race during faulting</title>
<updated>2019-12-01T14:29:19Z</updated>
<author>
<name>Thomas Hellstrom</name>
<email>thellstrom@vmware.com</email>
</author>
<published>2019-12-01T01:51:32Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=625110b5e9dae9074d8a7e67dd07f821a053eed7'/>
<id>urn:sha1:625110b5e9dae9074d8a7e67dd07f821a053eed7</id>
<content type='text'>
A huge pud page can theoretically be faulted in, racing with pmd_alloc()
in __handle_mm_fault().  That will lead to pmd_alloc() returning an
invalid pmd pointer.

Fix this by adding a pud_trans_unstable() function, similar to
pmd_trans_unstable(), and checking whether the pud is really stable before
using the pmd pointer.

Race:
  Thread 1:             Thread 2:                 Comment
  create_huge_pud()                               Fallback - not taken.
                        create_huge_pud()         Taken.
  pmd_alloc()                                     Returns an invalid pointer.

This will result in user-visible huge page data corruption.

Note that this was caught during a code audit rather than as a problem
seen in practice.  The only implementation that currently creates huge
pud page table entries appears to be dev_dax_huge_fault(), which does not
care much about private (COW) mappings or write-tracking; those are, I
believe, a prerequisite for create_huge_pud() falling back in thread 1
but not in thread 2.
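
A minimal sketch of the idea (pud_trans_unstable() is the helper described
above; the surrounding names such as vmf and retry_pud, and the exact hunk
in __handle_mm_fault(), are illustrative rather than the verbatim patch):

  /* Unstable if the pud may still be turned into a huge entry under us. */
  static inline int pud_trans_unstable(pud_t *pud)
  {
          return pud_none(*pud) || pud_trans_huge(*pud) || pud_devmap(*pud);
  }

  /* In __handle_mm_fault(), after allocating the pmd under vmf.pud: */
  vmf.pmd = pmd_alloc(mm, vmf.pud, address);
  if (!vmf.pmd)
          return VM_FAULT_OOM;
  /* A huge pud raced in behind pmd_alloc(); the pmd pointer is invalid. */
  if (pud_trans_unstable(vmf.pud))
          goto retry_pud;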

Link: http://lkml.kernel.org/r/20191115115808.21181-2-thomas_os@shipmail.org
Fixes: a00cc7d9dd93 ("mm, x86: add support for PUD-sized transparent hugepages")
Signed-off-by: Thomas Hellstrom &lt;thellstrom@vmware.com&gt;
Acked-by: Kirill A. Shutemov &lt;kirill.shutemov@linux.intel.com&gt;
Cc: Arnd Bergmann &lt;arnd@arndb.de&gt;
Cc: Matthew Wilcox &lt;willy@infradead.org&gt;
Cc: &lt;stable@vger.kernel.org&gt;
Signed-off-by: Andrew Morton &lt;akpm@linux-foundation.org&gt;
Signed-off-by: Linus Torvalds &lt;torvalds@linux-foundation.org&gt;
</content>
</entry>
<entry>
<title>mm: move the backup x_devmap() functions to asm-generic/pgtable.h</title>
<updated>2019-12-01T14:29:19Z</updated>
<author>
<name>Thomas Hellstrom</name>
<email>thellstrom@vmware.com</email>
</author>
<published>2019-12-01T01:51:29Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=bf1a12a8095615c9486f5463ca473d2d69ff6952'/>
<id>urn:sha1:bf1a12a8095615c9486f5463ca473d2d69ff6952</id>
<content type='text'>
The asm-generic/pgtable.h include file appears to be the correct place for
the backup x_devmap() inline functions.  Moving them here is also
necessary if we want to use x_devmap() in the [pmd|pud]_trans_unstable()
functions.  So move the x_devmap() functions to asm-generic/pgtable.h.
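
For reference, the backup definitions are trivial always-zero stubs used
when an architecture does not provide devmap support; a rough sketch of
the shape they take (the exact config guards may differ):

  #ifndef CONFIG_ARCH_HAS_PTE_DEVMAP
  static inline int pte_devmap(pte_t pte)
  {
          return 0;
  }
  #endif

  /* pmd_devmap(), pud_devmap() and pgd_devmap() follow the same pattern,
     returning 0 when the corresponding huge page support is not built in. */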

Link: http://lkml.kernel.org/r/20191115115808.21181-1-thomas_os@shipmail.org
Signed-off-by: Thomas Hellstrom &lt;thellstrom@vmware.com&gt;
Cc: Arnd Bergmann &lt;arnd@arndb.de&gt;
Cc: "Kirill A. Shutemov" &lt;kirill.shutemov@linux.intel.com&gt;
Cc: Matthew Wilcox &lt;willy@infradead.org&gt;
Signed-off-by: Andrew Morton &lt;akpm@linux-foundation.org&gt;
Signed-off-by: Linus Torvalds &lt;torvalds@linux-foundation.org&gt;
</content>
</entry>
<entry>
<title>asm-generic/mm: stub out p{4,u}d_clear_bad() if __PAGETABLE_P{4,U}D_FOLDED</title>
<updated>2019-12-01T14:29:19Z</updated>
<author>
<name>Vineet Gupta</name>
<email>Vineet.Gupta1@synopsys.com</email>
</author>
<published>2019-12-01T01:51:20Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=f2400abc782dc38a1fee9cfc13589d31f1a0404f'/>
<id>urn:sha1:f2400abc782dc38a1fee9cfc13589d31f1a0404f</id>
<content type='text'>
This came up as code bloat when removing __ARCH_HAS_5LEVEL_HACK for ARC:
when the p4d/pud levels are folded, p4d_clear_bad()/pud_clear_bad() can
never be reached, so they can be stubbed out.  With this patch we see the
following code reduction.

| bloat-o-meter2 vmlinux-D-elide-p4d_free_tlb vmlinux-E-elide-p?d_clear_bad
| add/remove: 0/2 grow/shrink: 0/0 up/down: 0/-40 (-40)
| function                                     old     new   delta
| pud_clear_bad                                 20       -     -20
| p4d_clear_bad                                 20       -     -20
| Total: Before=4136930, After=4136890, chg -1.000000%
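
The stub pattern is straightforward; a sketch of what the folded case
reduces to (the non-folded declarations stay as they were):

  #ifndef __PAGETABLE_P4D_FOLDED
  void p4d_clear_bad(p4d_t *);
  #else
  #define p4d_clear_bad(p4d)        do { } while (0)
  #endif

  #ifndef __PAGETABLE_PUD_FOLDED
  void pud_clear_bad(pud_t *);
  #else
  #define pud_clear_bad(pud)        do { } while (0)
  #endif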

Link: http://lkml.kernel.org/r/20191016162400.14796-6-vgupta@synopsys.com
Signed-off-by: Vineet Gupta &lt;vgupta@synopsys.com&gt;
Acked-by: Kirill A. Shutemov &lt;kirill.shutemov@linux.intel.com&gt;
Cc: Arnd Bergmann &lt;arnd@arndb.de&gt;
Cc: Will Deacon &lt;will@kernel.org&gt;
Cc: "Aneesh Kumar K . V" &lt;aneesh.kumar@linux.ibm.com&gt;
Cc: Nick Piggin &lt;npiggin@gmail.com&gt;
Cc: Peter Zijlstra &lt;peterz@infradead.org&gt;
Signed-off-by: Andrew Morton &lt;akpm@linux-foundation.org&gt;
Signed-off-by: Linus Torvalds &lt;torvalds@linux-foundation.org&gt;
</content>
</entry>
<entry>
<title>thp: update split_huge_page_pmd() comment</title>
<updated>2019-09-24T22:54:10Z</updated>
<author>
<name>Kefeng Wang</name>
<email>wangkefeng.wang@huawei.com</email>
</author>
<published>2019-09-23T22:37:41Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=9ef258bad7af2f6abee2bdfaaa6c41890d4f4db6'/>
<id>urn:sha1:9ef258bad7af2f6abee2bdfaaa6c41890d4f4db6</id>
<content type='text'>
Commit 78ddc5347341 ("thp: rename split_huge_page_pmd() to
split_huge_pmd()") renamed the function; update the related comment to
match.

Link: http://lkml.kernel.org/r/20190731033406.185285-1-wangkefeng.wang@huawei.com
Signed-off-by: Kefeng Wang &lt;wangkefeng.wang@huawei.com&gt;
Cc: "Kirill A . Shutemov" &lt;kirill.shutemov@linux.intel.com&gt;
Signed-off-by: Andrew Morton &lt;akpm@linux-foundation.org&gt;
Signed-off-by: Linus Torvalds &lt;torvalds@linux-foundation.org&gt;
</content>
</entry>
<entry>
<title>mm: consolidate pgtable_cache_init() and pgd_cache_init()</title>
<updated>2019-09-24T22:54:09Z</updated>
<author>
<name>Mike Rapoport</name>
<email>rppt@linux.ibm.com</email>
</author>
<published>2019-09-23T22:35:31Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=782de70c42930baae55234f3df0dc90774924447'/>
<id>urn:sha1:782de70c42930baae55234f3df0dc90774924447</id>
<content type='text'>
Both pgtable_cache_init() and pgd_cache_init() are used to initialize a
kmem cache for page table allocations on several architectures that do not
use PAGE_SIZE tables for one or more levels of the page table hierarchy.

Most architectures do not implement these functions and rely on the __weak
default NOP implementation of pgd_cache_init().  Since there is no such
default for pgtable_cache_init(), its empty stub is duplicated across most
architectures.

Rename the definitions of pgd_cache_init() to pgtable_cache_init() and
drop empty stubs of pgtable_cache_init().
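
With the rename, a single weak default NOP is enough; a sketch of the kind
of default this consolidates to (exact placement and annotations may
differ):

  /* Generic NOP; architectures that need a kmem cache override this. */
  void __init __weak pgtable_cache_init(void)
  {
  }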

Link: http://lkml.kernel.org/r/1566457046-22637-1-git-send-email-rppt@linux.ibm.com
Signed-off-by: Mike Rapoport &lt;rppt@linux.ibm.com&gt;
Acked-by: Will Deacon &lt;will@kernel.org&gt;		[arm64]
Acked-by: Thomas Gleixner &lt;tglx@linutronix.de&gt;	[x86]
Cc: Catalin Marinas &lt;catalin.marinas@arm.com&gt;
Cc: Ingo Molnar &lt;mingo@redhat.com&gt;
Cc: Borislav Petkov &lt;bp@alien8.de&gt;
Cc: Matthew Wilcox &lt;willy@infradead.org&gt;
Signed-off-by: Andrew Morton &lt;akpm@linux-foundation.org&gt;
Signed-off-by: Linus Torvalds &lt;torvalds@linux-foundation.org&gt;
</content>
</entry>
<entry>
<title>x86/mm: Initialize PGD cache during mm initialization</title>
<updated>2019-05-05T18:32:46Z</updated>
<author>
<name>Nadav Amit</name>
<email>nadav.amit@gmail.com</email>
</author>
<published>2019-05-05T01:11:24Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=caa841360134f863987f2d4f77b8dc2fbb7596f8'/>
<id>urn:sha1:caa841360134f863987f2d4f77b8dc2fbb7596f8</id>
<content type='text'>
Poking-mm initialization might require duplicating the PGD at an early
boot stage. Initialize the PGD cache earlier to prevent boot failures.
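
A sketch of the kind of early-init hook this amounts to (pgd_cache,
PGD_SIZE and PGD_ALIGN are illustrative names here, not necessarily the
exact x86 ones):

  /* Create the slab cache for PGDs before poking_init() duplicates the
     kernel PGD for the temporary patching mm. */
  void __init pgd_cache_init(void)
  {
          pgd_cache = kmem_cache_create("pgd_cache", PGD_SIZE, PGD_ALIGN,
                                        SLAB_PANIC, NULL);
  }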

Reported-by: kernel test robot &lt;lkp@intel.com&gt;
Signed-off-by: Nadav Amit &lt;namit@vmware.com&gt;
Cc: Andy Lutomirski &lt;luto@kernel.org&gt;
Cc: Borislav Petkov &lt;bp@alien8.de&gt;
Cc: Dave Hansen &lt;dave.hansen@linux.intel.com&gt;
Cc: H. Peter Anvin &lt;hpa@zytor.com&gt;
Cc: Linus Torvalds &lt;torvalds@linux-foundation.org&gt;
Cc: Peter Zijlstra &lt;peterz@infradead.org&gt;
Cc: Rick Edgecombe &lt;rick.p.edgecombe@intel.com&gt;
Cc: Rik van Riel &lt;riel@surriel.com&gt;
Cc: Stephen Rothwell &lt;sfr@canb.auug.org.au&gt;
Cc: Thomas Gleixner &lt;tglx@linutronix.de&gt;
Fixes: 4fc19708b165 ("x86/alternatives: Initialize temporary mm for patching")
Link: http://lkml.kernel.org/r/20190505011124.39692-1-namit@vmware.com
Signed-off-by: Ingo Molnar &lt;mingo@kernel.org&gt;
</content>
</entry>
<entry>
<title>mm: update ptep_modify_prot_commit to take old pte value as arg</title>
<updated>2019-03-06T05:07:18Z</updated>
<author>
<name>Aneesh Kumar K.V</name>
<email>aneesh.kumar@linux.ibm.com</email>
</author>
<published>2019-03-05T23:46:29Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=04a8645304500be88b3345b65fef7efe58016166'/>
<id>urn:sha1:04a8645304500be88b3345b65fef7efe58016166</id>
<content type='text'>
Architectures like ppc64 need to do a conditional TLB flush based on the
old and new values of the pte.  Enable that by passing the old pte value
as an argument.
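
The resulting helper signature, roughly (the vma argument comes from the
companion patch in this series):

  void ptep_modify_prot_commit(struct vm_area_struct *vma, unsigned long addr,
                               pte_t *ptep, pte_t old_pte, pte_t pte);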

Link: http://lkml.kernel.org/r/20190116085035.29729-3-aneesh.kumar@linux.ibm.com
Signed-off-by: Aneesh Kumar K.V &lt;aneesh.kumar@linux.ibm.com&gt;
Cc: Benjamin Herrenschmidt &lt;benh@kernel.crashing.org&gt;
Cc: Heiko Carstens &lt;heiko.carstens@de.ibm.com&gt;
Cc: "H. Peter Anvin" &lt;hpa@zytor.com&gt;
Cc: Ingo Molnar &lt;mingo@elte.hu&gt;
Cc: Martin Schwidefsky &lt;schwidefsky@de.ibm.com&gt;
Cc: Michael Ellerman &lt;mpe@ellerman.id.au&gt;
Cc: Nicholas Piggin &lt;npiggin@gmail.com&gt;
Cc: Paul Mackerras &lt;paulus@samba.org&gt;
Cc: Thomas Gleixner &lt;tglx@linutronix.de&gt;
Signed-off-by: Andrew Morton &lt;akpm@linux-foundation.org&gt;
Signed-off-by: Linus Torvalds &lt;torvalds@linux-foundation.org&gt;
</content>
</entry>
<entry>
<title>mm: update ptep_modify_prot_start/commit to take vm_area_struct as arg</title>
<updated>2019-03-06T05:07:18Z</updated>
<author>
<name>Aneesh Kumar K.V</name>
<email>aneesh.kumar@linux.ibm.com</email>
</author>
<published>2019-03-05T23:46:26Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=0cbe3e26abe0cfe7effb67f620a77d46cce628b2'/>
<id>urn:sha1:0cbe3e26abe0cfe7effb67f620a77d46cce628b2</id>
<content type='text'>
Patch series "NestMMU pte upgrade workaround for mprotect", v5.

We can upgrade pte access (R -&gt; RW transition) via mprotect.  We need to
make sure we follow the recommended pte update sequence as outlined in
commit bd5050e38aec ("powerpc/mm/radix: Change pte relax sequence to
handle nest MMU hang") for such updates.  This patch series does that.

This patch (of 5):

Some architectures may want to call flush_tlb_range from these helpers.
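
In practice this means the helpers take the vma rather than just the mm; a
sketch of the resulting signatures:

  pte_t ptep_modify_prot_start(struct vm_area_struct *vma, unsigned long addr,
                               pte_t *ptep);
  void ptep_modify_prot_commit(struct vm_area_struct *vma, unsigned long addr,
                               pte_t *ptep, pte_t pte);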

Link: http://lkml.kernel.org/r/20190116085035.29729-2-aneesh.kumar@linux.ibm.com
Signed-off-by: Aneesh Kumar K.V &lt;aneesh.kumar@linux.ibm.com&gt;
Cc: Nicholas Piggin &lt;npiggin@gmail.com&gt;
Cc: Benjamin Herrenschmidt &lt;benh@kernel.crashing.org&gt;
Cc: Paul Mackerras &lt;paulus@samba.org&gt;
Cc: Michael Ellerman &lt;mpe@ellerman.id.au&gt;
Cc: Heiko Carstens &lt;heiko.carstens@de.ibm.com&gt;
Cc: Martin Schwidefsky &lt;schwidefsky@de.ibm.com&gt;
Cc: Ingo Molnar &lt;mingo@elte.hu&gt;
Cc: "H. Peter Anvin" &lt;hpa@zytor.com&gt;
Cc: Thomas Gleixner &lt;tglx@linutronix.de&gt;
Signed-off-by: Andrew Morton &lt;akpm@linux-foundation.org&gt;
Signed-off-by: Linus Torvalds &lt;torvalds@linux-foundation.org&gt;
</content>
</entry>
<entry>
<title>lib/ioremap: ensure break-before-make is used for huge p4d mappings</title>
<updated>2018-12-28T20:11:50Z</updated>
<author>
<name>Will Deacon</name>
<email>will.deacon@arm.com</email>
</author>
<published>2018-12-28T08:37:53Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=8e2d43405b22e98cf5f3730c1829ec1fdbe17ae7'/>
<id>urn:sha1:8e2d43405b22e98cf5f3730c1829ec1fdbe17ae7</id>
<content type='text'>
Whilst no architectures currently enable support for huge p4d mappings in
the vmap area, the code that does exist should use break-before-make, as
we do for huge pud and pmd entries.
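
A sketch of the resulting check in the p4d path of lib/ioremap.c,
mirroring the existing pud/pmd versions (helper names follow those paths;
p4d_free_pud_page() tears down any existing table before the huge entry
is installed):

  static int ioremap_try_huge_p4d(p4d_t *p4d, unsigned long addr,
                                  unsigned long end, phys_addr_t phys_addr,
                                  pgprot_t prot)
  {
          if (!ioremap_p4d_enabled())
                  return 0;
          if ((end - addr) != P4D_SIZE)
                  return 0;
          if (!IS_ALIGNED(addr, P4D_SIZE) || !IS_ALIGNED(phys_addr, P4D_SIZE))
                  return 0;
          /* Break-before-make: free any existing pud table first. */
          if (p4d_present(*p4d))
                  if (!p4d_free_pud_page(p4d, addr))
                          return 0;
          return p4d_set_huge(p4d, phys_addr, prot);
  }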

Link: http://lkml.kernel.org/r/1544120495-17438-6-git-send-email-will.deacon@arm.com
Signed-off-by: Will Deacon &lt;will.deacon@arm.com&gt;
Reviewed-by: Toshi Kani &lt;toshi.kani@hpe.com&gt;
Cc: Chintan Pandya &lt;cpandya@codeaurora.org&gt;
Cc: Toshi Kani &lt;toshi.kani@hpe.com&gt;
Cc: Thomas Gleixner &lt;tglx@linutronix.de&gt;
Cc: Michal Hocko &lt;mhocko@suse.com&gt;
Cc: "H. Peter Anvin" &lt;hpa@zytor.com&gt;
Cc: Ingo Molnar &lt;mingo@elte.hu&gt;
Cc: Sean Christopherson &lt;sean.j.christopherson@intel.com&gt;
Signed-off-by: Andrew Morton &lt;akpm@linux-foundation.org&gt;
Signed-off-by: Linus Torvalds &lt;torvalds@linux-foundation.org&gt;
</content>
</entry>
<entry>
<title>generic/pgtable: Introduce set_pte_safe()</title>
<updated>2018-12-05T08:03:06Z</updated>
<author>
<name>Dan Williams</name>
<email>dan.j.williams@intel.com</email>
</author>
<published>2018-12-04T21:37:16Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=4369deaa2f022ef92da45a0e7eec8a4a52e8e8a4'/>
<id>urn:sha1:4369deaa2f022ef92da45a0e7eec8a4a52e8e8a4</id>
<content type='text'>
Commit:

  f77084d96355 "x86/mm/pat: Disable preemption around __flush_tlb_all()"

introduced a warning to capture cases where __flush_tlb_all() is called
without preemption disabled. It triggers a false-positive warning in the
memory hotplug path.

On investigation it was found that the __flush_tlb_all() calls are not
necessary. However, they are only "not necessary" in practice provided
the ptes are being initially populated from the !present state.

Introduce set_pte_safe() as a sanity check that the pte is being updated
in a way that does not require a TLB flush.

Forgive the macro; the availability of the various set_pte() levels is
hit and miss across architectures.
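
A sketch of what the macro boils down to at the pte level (the pmd, pud,
p4d and pgd variants follow the same pattern):

  #define set_pte_safe(ptep, pte) \
  ({ \
          /* Overwriting a present, different pte would need a TLB flush. */ \
          WARN_ON_ONCE(pte_present(*ptep) &amp;&amp; !pte_same(*ptep, pte)); \
          set_pte(ptep, pte); \
  })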

[ mingo: Minor readability edits. ]

Suggested-by: Peter Zijlstra &lt;peterz@infradead.org&gt;
Suggested-by: Dave Hansen &lt;dave.hansen@intel.com&gt;
Signed-off-by: Dan Williams &lt;dan.j.williams@intel.com&gt;
Acked-by: Peter Zijlstra (Intel) &lt;peterz@infradead.org&gt;
Acked-by: Kirill A. Shutemov &lt;kirill.shutemov@linux.intel.com&gt;
Cc: Andy Lutomirski &lt;luto@kernel.org&gt;
Cc: Borislav Petkov &lt;bp@alien8.de&gt;
Cc: Dave Hansen &lt;dave.hansen@linux.intel.com&gt;
Cc: H. Peter Anvin &lt;hpa@zytor.com&gt;
Cc: Linus Torvalds &lt;torvalds@linux-foundation.org&gt;
Cc: Rik van Riel &lt;riel@surriel.com&gt;
Cc: Sebastian Andrzej Siewior &lt;bigeasy@linutronix.de&gt;
Cc: Thomas Gleixner &lt;tglx@linutronix.de&gt;
Link: https://lkml.kernel.org/r/279dadae-9148-465c-7ec6-3f37e026c6c9@intel.com
Signed-off-by: Ingo Molnar &lt;mingo@kernel.org&gt;
</content>
</entry>
</feed>
