<feed xmlns='http://www.w3.org/2005/Atom'>
<title>linux/include/asm-generic/pgtable.h, branch v4.19</title>
<subtitle>Mirror of https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/
</subtitle>
<id>https://git.shady.money/linux/atom?h=v4.19</id>
<link rel='self' href='https://git.shady.money/linux/atom?h=v4.19'/>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/'/>
<updated>2018-08-17T23:20:29Z</updated>
<entry>
<title>mm: provide a fallback for PAGE_KERNEL_EXEC for architectures</title>
<updated>2018-08-17T23:20:29Z</updated>
<author>
<name>Luis R. Rodriguez</name>
<email>mcgrof@kernel.org</email>
</author>
<published>2018-08-17T22:46:32Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=1a9b4b3d75679fbe8c3bb8fb7e957ea693b6a89c'/>
<id>urn:sha1:1a9b4b3d75679fbe8c3bb8fb7e957ea693b6a89c</id>
<content type='text'>
Some architectures just don't have PAGE_KERNEL_EXEC.  The mm/nommu.c and
mm/vmalloc.c code have been using PAGE_KERNEL as a fallback for years.
Move this fallback to asm-generic.
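
Concretely, the fallback amounts to something like this in
include/asm-generic/pgtable.h (a sketch of the idea, not a verbatim
diff):

    /*
     * Architectures without an executable kernel page protection fall
     * back to plain PAGE_KERNEL, as mm/nommu.c and mm/vmalloc.c did.
     */
    #ifndef PAGE_KERNEL_EXEC
    # define PAGE_KERNEL_EXEC PAGE_KERNEL
    #endif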

Link: http://lkml.kernel.org/r/20180510185507.2439-3-mcgrof@kernel.org
Signed-off-by: Luis R. Rodriguez &lt;mcgrof@kernel.org&gt;
Suggested-by: Matthew Wilcox &lt;willy@infradead.org&gt;
Reviewed-by: Andrew Morton &lt;akpm@linux-foundation.org&gt;
Cc: Arnd Bergmann &lt;arnd@arndb.de&gt;
Cc: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;
Cc: Geert Uytterhoeven &lt;geert@linux-m68k.org&gt;
Signed-off-by: Andrew Morton &lt;akpm@linux-foundation.org&gt;
Signed-off-by: Linus Torvalds &lt;torvalds@linux-foundation.org&gt;
</content>
</entry>
<entry>
<title>mm: provide a fallback for PAGE_KERNEL_RO for architectures</title>
<updated>2018-08-17T23:20:29Z</updated>
<author>
<name>Luis R. Rodriguez</name>
<email>mcgrof@kernel.org</email>
</author>
<published>2018-08-17T22:46:29Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=a3266bd49c721e2e0a71f352d83713fbd60caadb'/>
<id>urn:sha1:a3266bd49c721e2e0a71f352d83713fbd60caadb</id>
<content type='text'>
Some architectures do not define certain PAGE_KERNEL_* flags; this is
either because:

 a) The way to implement some of these flags is *not yet ported*, or
 b) The architecture *has no way* to describe them

Over time we have accumulated a few PAGE_KERNEL_* fallback workarounds
in the kernel for architectures which do not define them, using
*relatively safe* equivalents.  Move these scattered fallback hacks
into asm-generic.

We start off with PAGE_KERNEL_RO, using PAGE_KERNEL as a fallback.
This has been in place in the firmware loader for years.  Move the
fallback into the respective asm-generic header.
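
Concretely (a sketch of the idea, not a verbatim diff):

    /*
     * PAGE_KERNEL is the *relatively safe* stand-in: the mapping ends
     * up writable where a read-only one was requested, matching the
     * old firmware loader workaround.
     */
    #ifndef PAGE_KERNEL_RO
    # define PAGE_KERNEL_RO PAGE_KERNEL
    #endif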

Link: http://lkml.kernel.org/r/20180510185507.2439-2-mcgrof@kernel.org
Signed-off-by: Luis R. Rodriguez &lt;mcgrof@kernel.org&gt;
Reviewed-by: Andrew Morton &lt;akpm@linux-foundation.org&gt;
Cc: Arnd Bergmann &lt;arnd@arndb.de&gt;
Cc: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;
Cc: Matthew Wilcox &lt;willy@infradead.org&gt;
Cc: Geert Uytterhoeven &lt;geert@linux-m68k.org&gt;
Signed-off-by: Andrew Morton &lt;akpm@linux-foundation.org&gt;
Signed-off-by: Linus Torvalds &lt;torvalds@linux-foundation.org&gt;
</content>
</entry>
<entry>
<title>Merge branch 'l1tf-final' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip</title>
<updated>2018-08-14T16:46:06Z</updated>
<author>
<name>Linus Torvalds</name>
<email>torvalds@linux-foundation.org</email>
</author>
<published>2018-08-14T16:46:06Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=958f338e96f874a0d29442396d6adf9c1e17aa2d'/>
<id>urn:sha1:958f338e96f874a0d29442396d6adf9c1e17aa2d</id>
<content type='text'>
Merge L1 Terminal Fault fixes from Thomas Gleixner:
 "L1TF, aka L1 Terminal Fault, is yet another speculative hardware
  engineering trainwreck. It's a hardware vulnerability which allows
  unprivileged speculative access to data which is available in the
  Level 1 Data Cache when the page table entry controlling the virtual
  address, which is used for the access, has the Present bit cleared or
  other reserved bits set.

  If an instruction accesses a virtual address for which the relevant
  page table entry (PTE) has the Present bit cleared or other reserved
  bits set, then speculative execution ignores the invalid PTE and loads
  the referenced data if it is present in the Level 1 Data Cache, as if
  the page referenced by the address bits in the PTE was still present
  and accessible.

  While this is a purely speculative mechanism and the instruction will
  raise a page fault when it is retired eventually, the pure act of
  loading the data and making it available to other speculative
  instructions opens up the opportunity for side channel attacks to
  unprivileged malicious code, similar to the Meltdown attack.

  While Meltdown breaks the user space to kernel space protection, L1TF
  allows attacks on any physical memory address in the system, and the
  attack works across all protection domains. It allows an attack on
  SGX and also works from inside virtual machines, because the
  speculation bypasses the extended page table (EPT) protection
  mechanism.

  The associated CVEs are: CVE-2018-3615, CVE-2018-3620, CVE-2018-3646

  The mitigations provided by this pull request include:

   - Host side protection by inverting the upper address bits of a
     non-present page table entry so the entry points to uncacheable
     memory (see the sketch below).

   - Hypervisor protection by flushing L1 Data Cache on VMENTER.

   - SMT (HyperThreading) control knobs, which allow SMT to be 'turned
     off' by offlining the sibling CPU threads. The knobs are available
     on the kernel command line and at runtime via sysfs.

   - Control knobs for the hypervisor mitigation, related to L1D flush
     and SMT control. The knobs are available on the kernel command
     line and at runtime via sysfs.

   - Extensive documentation about L1TF including various degrees of
     mitigations.

  Thanks to all people who have contributed to this in various ways -
  patches, review, testing, backporting - and the fruitful, sometimes
  heated, but at the end constructive discussions.

  There is work in progress to provide other forms of mitigation, which
  might be less horrible performance-wise for particular kinds of
  workloads, but these are not yet ready for consumption due to their
  complexity and limitations"
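
As an illustration of the first mitigation above, a simplified sketch
of the x86 PFN inversion (the real helpers live in
arch/x86/include/asm/pgtable-invert.h; names and layout here are
slightly simplified):

    /* A non-present but non-empty entry needs its PFN inverted. */
    static inline bool __pte_needs_invert(u64 val)
    {
            return val &amp;&amp; !(val &amp; _PAGE_PRESENT);
    }

    /*
     * Flip the physical-address bits so the entry points at
     * nonexistent, uncacheable memory; reading the PFN back undoes
     * the inversion.
     */
    static inline u64 flip_protnone_mask(u64 val)
    {
            if (__pte_needs_invert(val))
                    val = (val &amp; ~(u64)PHYSICAL_PAGE_MASK) |
                          ((~val) &amp; PHYSICAL_PAGE_MASK);
            return val;
    }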

* 'l1tf-final' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (75 commits)
  x86/microcode: Allow late microcode loading with SMT disabled
  tools headers: Synchronise x86 cpufeatures.h for L1TF additions
  x86/mm/kmmio: Make the tracer robust against L1TF
  x86/mm/pat: Make set_memory_np() L1TF safe
  x86/speculation/l1tf: Make pmd/pud_mknotpresent() invert
  x86/speculation/l1tf: Invert all not present mappings
  cpu/hotplug: Fix SMT supported evaluation
  KVM: VMX: Tell the nested hypervisor to skip L1D flush on vmentry
  x86/speculation: Use ARCH_CAPABILITIES to skip L1D flush on vmentry
  x86/speculation: Simplify sysfs report of VMX L1TF vulnerability
  Documentation/l1tf: Remove Yonah processors from not vulnerable list
  x86/KVM/VMX: Don't set l1tf_flush_l1d from vmx_handle_external_intr()
  x86/irq: Let interrupt handlers set kvm_cpu_l1tf_flush_l1d
  x86: Don't include linux/irq.h from asm/hardirq.h
  x86/KVM/VMX: Introduce per-host-cpu analogue of l1tf_flush_l1d
  x86/irq: Demote irq_cpustat_t::__softirq_pending to u16
  x86/KVM/VMX: Move the l1tf_flush_l1d test to vmx_l1d_flush()
  x86/KVM/VMX: Replace 'vmx_l1d_flush_always' with 'vmx_l1d_flush_cond'
  x86/KVM/VMX: Don't set l1tf_flush_l1d to true from vmx_l1d_flush()
  cpu/hotplug: detect SMT disabled by BIOS
  ...
</content>
</entry>
<entry>
<title>x86/speculation/l1tf: Unbreak !__HAVE_ARCH_PFN_MODIFY_ALLOWED architectures</title>
<updated>2018-07-15T09:29:26Z</updated>
<author>
<name>Jiri Kosina</name>
<email>jkosina@suse.cz</email>
</author>
<published>2018-07-14T19:56:13Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=6c26fcd2abfe0a56bbd95271fce02df2896cfd24'/>
<id>urn:sha1:6c26fcd2abfe0a56bbd95271fce02df2896cfd24</id>
<content type='text'>
pfn_modify_allowed() and arch_has_pfn_modify_check() are outside of the
!__ASSEMBLY__ section in include/asm-generic/pgtable.h, which confuses
the assembler on archs that don't have __HAVE_ARCH_PFN_MODIFY_ALLOWED
(e.g. ia64) and breaks the build:

    include/asm-generic/pgtable.h: Assembler messages:
    include/asm-generic/pgtable.h:538: Error: Unknown opcode `static inline bool pfn_modify_allowed(unsigned long pfn,pgprot_t prot)'
    include/asm-generic/pgtable.h:540: Error: Unknown opcode `return true'
    include/asm-generic/pgtable.h:543: Error: Unknown opcode `static inline bool arch_has_pfn_modify_check(void)'
    include/asm-generic/pgtable.h:545: Error: Unknown opcode `return false'
    arch/ia64/kernel/entry.S:69: Error: `mov' does not fit into bundle

Move those two static inlines into the !__ASSEMBLY__ section so that they 
don't confuse the asm build pass.
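
The resulting placement looks roughly like this (a sketch; the
!__ASSEMBLY__ section guards much more than these two fallbacks):

    #ifndef __ASSEMBLY__
    /* ... C-only declarations, invisible to the assembler pass ... */

    #ifndef __HAVE_ARCH_PFN_MODIFY_ALLOWED
    static inline bool pfn_modify_allowed(unsigned long pfn, pgprot_t prot)
    {
            return true;
    }

    static inline bool arch_has_pfn_modify_check(void)
    {
            return false;
    }
    #endif

    #endif /* !__ASSEMBLY__ */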

Fixes: 42e4089c7890 ("x86/speculation/l1tf: Disallow non privileged high MMIO PROT_NONE mappings")
Signed-off-by: Jiri Kosina &lt;jkosina@suse.cz&gt;
Signed-off-by: Thomas Gleixner &lt;tglx@linutronix.de&gt;
</content>
</entry>
<entry>
<title>ioremap: Update pgtable free interfaces with addr</title>
<updated>2018-07-04T19:37:08Z</updated>
<author>
<name>Chintan Pandya</name>
<email>cpandya@codeaurora.org</email>
</author>
<published>2018-06-27T14:13:47Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=785a19f9d1dd8a4ab2d0633be4656653bd3de1fc'/>
<id>urn:sha1:785a19f9d1dd8a4ab2d0633be4656653bd3de1fc</id>
<content type='text'>
The following kernel panic was observed on an ARM64 platform due to a
stale TLB entry.

 1. ioremap with 4K size, a valid pte page table is set.
 2. iounmap it, its pte entry is set to 0.
 3. ioremap the same address with 2M size, update its pmd entry with
    a new value.
 4. CPU may hit an exception because the old pmd entry is still in the
    TLB, which leads to a kernel panic.

Commit b6bdb7517c3d ("mm/vmalloc: add interfaces to free unmapped page
table") addressed this panic by falling back to pte mappings in the
above case on ARM64.

To support pmd mappings in all cases, a TLB purge needs to be performed
in this path on ARM64.

Add a new arg, 'addr', to pud_free_pmd_page() and pmd_free_pte_page()
so that the TLB purge can be added later in separate patches.
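
The interface change, in sketch form (the x86 and arm64 implementations
are updated to match):

    /* Before: */
    int pud_free_pmd_page(pud_t *pud);
    int pmd_free_pte_page(pmd_t *pmd);

    /* After: 'addr' is the unmapped virtual address, so a follow-up
     * patch can purge the stale TLB entry for it. */
    int pud_free_pmd_page(pud_t *pud, unsigned long addr);
    int pmd_free_pte_page(pmd_t *pmd, unsigned long addr);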

[toshi.kani@hpe.com: merge changes, rewrite patch description]
Fixes: 28ee90fe6048 ("x86/mm: implement free pmd/pte page interfaces")
Signed-off-by: Chintan Pandya &lt;cpandya@codeaurora.org&gt;
Signed-off-by: Toshi Kani &lt;toshi.kani@hpe.com&gt;
Signed-off-by: Thomas Gleixner &lt;tglx@linutronix.de&gt;
Cc: mhocko@suse.com
Cc: akpm@linux-foundation.org
Cc: hpa@zytor.com
Cc: linux-mm@kvack.org
Cc: linux-arm-kernel@lists.infradead.org
Cc: Will Deacon &lt;will.deacon@arm.com&gt;
Cc: Joerg Roedel &lt;joro@8bytes.org&gt;
Cc: stable@vger.kernel.org
Cc: Andrew Morton &lt;akpm@linux-foundation.org&gt;
Cc: Michal Hocko &lt;mhocko@suse.com&gt;
Cc: "H. Peter Anvin" &lt;hpa@zytor.com&gt;
Cc: &lt;stable@vger.kernel.org&gt;
Link: https://lkml.kernel.org/r/20180627141348.21777-3-toshi.kani@hpe.com

</content>
</entry>
<entry>
<title>x86/speculation/l1tf: Disallow non privileged high MMIO PROT_NONE mappings</title>
<updated>2018-06-20T17:10:01Z</updated>
<author>
<name>Andi Kleen</name>
<email>ak@linux.intel.com</email>
</author>
<published>2018-06-13T22:48:27Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=42e4089c7890725fcd329999252dc489b72f2921'/>
<id>urn:sha1:42e4089c7890725fcd329999252dc489b72f2921</id>
<content type='text'>
For L1TF, PROT_NONE mappings are protected by inverting the PFN in the
page table entry. This sets the high bits in the CPU's address space,
making sure an unmapped entry does not point to valid cached memory.

Some server system BIOSes put the MMIO mappings high up in the physical
address space. If such a high mapping were exposed to unprivileged
users, they could attack low memory by setting such a mapping to
PROT_NONE. This could happen through a special device driver that is
not access protected; normal /dev/mem is, of course, access protected.

To avoid this, forbid PROT_NONE mappings or mprotect for high MMIO
mappings.

Valid page mappings are allowed because the system is then unsafe
anyway.

It's not expected that users commonly use PROT_NONE on MMIO. But to
minimize any impact, this is only enforced if the mapping actually
refers to a high MMIO address (defined as the MAX_PA-1 bit being set),
and the check is also skipped for root.

For mmaps this is straightforward and can be handled in vm_insert_pfn()
and in remap_pfn_range().

For mprotect it's a bit trickier. At the point where the actual PTEs
are accessed, a lot of state has been changed and it would be difficult
to undo on an error. Since this is an uncommon case, use a separate
early page table walk pass for MMIO PROT_NONE mappings that checks for
this condition early. For non-MMIO and non-PROT_NONE there are no
changes.
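
A sketch of the policy (illustrative only, not the exact x86
implementation):

    /*
     * Refuse non-present mappings of high PFNs for unprivileged
     * callers; everything else is allowed.
     */
    static bool pfn_modify_allowed(unsigned long pfn, pgprot_t prot)
    {
            if (pgprot_val(prot) &amp; _PAGE_PRESENT)
                    return true;    /* valid mappings stay allowed */
            /* high MMIO: the MAX_PA-1 bit of the address is set */
            if (pfn &lt; (1UL &lt;&lt; (boot_cpu_data.x86_phys_bits - 1 - PAGE_SHIFT)))
                    return true;
            return capable(CAP_SYS_ADMIN);  /* skip the check for root */
    }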

Signed-off-by: Andi Kleen &lt;ak@linux.intel.com&gt;
Signed-off-by: Thomas Gleixner &lt;tglx@linutronix.de&gt;
Reviewed-by: Josh Poimboeuf &lt;jpoimboe@redhat.com&gt;
Acked-by: Dave Hansen &lt;dave.hansen@intel.com&gt;


</content>
</entry>
<entry>
<title>Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/sparc-next</title>
<updated>2018-04-03T21:08:58Z</updated>
<author>
<name>Linus Torvalds</name>
<email>torvalds@linux-foundation.org</email>
</author>
<published>2018-04-03T21:08:58Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=4608f064532c28c0ea3c03fe26a3a5909852811a'/>
<id>urn:sha1:4608f064532c28c0ea3c03fe26a3a5909852811a</id>
<content type='text'>
Pull sparc updates from David Miller:

 1) Add support for ADI (Application Data Integrity) found in more
    recent sparc64 cpus. Essentially this is key-based access to
    virtual memory, and if the key encoded in the virtual address is
    wrong you get a trap.

    The mm changes were reviewed by Andrew Morton and others.

    Work by Khalid Aziz.

 2) Validate DAX completion index range properly, from Rob Gardner.

 3) Add proper Kconfig deps for DAX driver. From Guenter Roeck.

* git://git.kernel.org/pub/scm/linux/kernel/git/davem/sparc-next:
  sparc64: Make atomic_xchg() an inline function rather than a macro.
  sparc64: Properly range check DAX completion index
  sparc: Make auxiliary vectors for ADI available on 32-bit as well
  sparc64: Oracle DAX driver depends on SPARC64
  sparc64: Update signal delivery to use new helper functions
  sparc64: Add support for ADI (Application Data Integrity)
  mm: Allow arch code to override copy_highpage()
  mm: Clear arch specific VM flags on protection change
  mm: Add address parameter to arch_validate_prot()
  sparc64: Add auxiliary vectors to report platform ADI properties
  sparc64: Add handler for "Memory Corruption Detected" trap
  sparc64: Add HV fault type handlers for ADI related faults
  sparc64: Add support for ADI register fields, ASIs and traps
  mm, swap: Add infrastructure for saving page metadata on swap
  signals, sparc: Add signal codes for ADI violations
</content>
</entry>
<entry>
<title>mm/vmalloc: add interfaces to free unmapped page table</title>
<updated>2018-03-23T00:07:01Z</updated>
<author>
<name>Toshi Kani</name>
<email>toshi.kani@hpe.com</email>
</author>
<published>2018-03-22T23:17:20Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=b6bdb7517c3d3f41f20e5c2948d6bc3f8897394e'/>
<id>urn:sha1:b6bdb7517c3d3f41f20e5c2948d6bc3f8897394e</id>
<content type='text'>
On architectures with CONFIG_HAVE_ARCH_HUGE_VMAP set, ioremap() may
create pud/pmd mappings.  A kernel panic was observed on arm64 systems
with Cortex-A75 in the following steps as described by Hanjun Guo.

 1. ioremap a 4K size; a valid page table will be built,
 2. iounmap it; pte0 will be set to 0,
 3. ioremap the same address with 2M size; pgd/pmd is unchanged,
    then a new value is set for the pmd,
 4. pte0 is leaked,
 5. CPU may meet an exception because the old pmd is still in the TLB,
    which will lead to a kernel panic.

This panic is not reproducible on x86.  INVLPG, called from iounmap,
purges all levels of entries associated with the purged address on x86.
x86 still has the memory leak, however.

The patch changes the ioremap path to free unmapped page table(s) since
doing so in the unmap path has the following issues:

 - The iounmap() path is shared with vunmap(). Since vmap() only
   supports pte mappings, making vunmap() free a pte page is an
   overhead for regular vmap users, as they do not need a pte page
   freed up.

 - Checking if all entries in a pte page are cleared in the unmap path
   is racy, and serializing this check is expensive.

 - The unmap path calls free_vmap_area_noflush() to do lazy TLB purges.
   Clearing a pud/pmd entry before the lazy TLB purges needs an extra
   TLB purge.

Add two interfaces, pud_free_pmd_page() and pmd_free_pte_page(), which
clear a given pud/pmd entry and free up a page for the lower level
entries.

This patch implements their stub functions on x86 and arm64, which work
as a workaround.
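
Architectures without CONFIG_HAVE_ARCH_HUGE_VMAP get trivial fallbacks
(a sketch of the asm-generic defaults):

    #ifndef CONFIG_HAVE_ARCH_HUGE_VMAP
    static inline int pud_free_pmd_page(pud_t *pud)
    {
            return 0;       /* nothing freed: callers keep pte mappings */
    }
    static inline int pmd_free_pte_page(pmd_t *pmd)
    {
            return 0;
    }
    #endif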

[akpm@linux-foundation.org: fix typo in pmd_free_pte_page() stub]
Link: http://lkml.kernel.org/r/20180314180155.19492-2-toshi.kani@hpe.com
Fixes: e61ce6ade404e ("mm: change ioremap to set up huge I/O mappings")
Reported-by: Lei Li &lt;lious.lilei@hisilicon.com&gt;
Signed-off-by: Toshi Kani &lt;toshi.kani@hpe.com&gt;
Cc: Catalin Marinas &lt;catalin.marinas@arm.com&gt;
Cc: Wang Xuefeng &lt;wxf.wang@hisilicon.com&gt;
Cc: Will Deacon &lt;will.deacon@arm.com&gt;
Cc: Hanjun Guo &lt;guohanjun@huawei.com&gt;
Cc: Michal Hocko &lt;mhocko@suse.com&gt;
Cc: Thomas Gleixner &lt;tglx@linutronix.de&gt;
Cc: Ingo Molnar &lt;mingo@redhat.com&gt;
Cc: "H. Peter Anvin" &lt;hpa@zytor.com&gt;
Cc: Borislav Petkov &lt;bp@suse.de&gt;
Cc: Matthew Wilcox &lt;willy@infradead.org&gt;
Cc: Chintan Pandya &lt;cpandya@codeaurora.org&gt;
Cc: &lt;stable@vger.kernel.org&gt;
Signed-off-by: Andrew Morton &lt;akpm@linux-foundation.org&gt;
Signed-off-by: Linus Torvalds &lt;torvalds@linux-foundation.org&gt;
</content>
</entry>
<entry>
<title>mm, swap: Add infrastructure for saving page metadata on swap</title>
<updated>2018-03-18T14:38:45Z</updated>
<author>
<name>Khalid Aziz</name>
<email>khalid.aziz@oracle.com</email>
</author>
<published>2018-02-21T17:15:44Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=ca827d55ebaa24de9fca36ee24e42d6fc5119ee3'/>
<id>urn:sha1:ca827d55ebaa24de9fca36ee24e42d6fc5119ee3</id>
<content type='text'>
If a processor supports special metadata for a page, for example ADI
version tags on SPARC M7, this metadata must be saved when the page is
swapped out. The same metadata must be restored when the page is swapped
back in. This patch adds two new architecture-specific functions:
arch_do_swap_page(), to be called when a page is swapped in, and
arch_unmap_one(), to be called when a page is being unmapped for swap
out. These architecture hooks allow page metadata to be saved if the
architecture supports it.
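
The generic defaults are no-ops unless an architecture overrides them
(a sketch following the usual __HAVE_ARCH_* pattern in
asm-generic/pgtable.h):

    #ifndef __HAVE_ARCH_DO_SWAP_PAGE
    /* Default: no metadata to restore on swap-in. */
    static inline void arch_do_swap_page(struct mm_struct *mm,
                                         struct vm_area_struct *vma,
                                         unsigned long addr,
                                         pte_t pte, pte_t oldpte)
    {
    }
    #endif

    #ifndef __HAVE_ARCH_UNMAP_ONE
    /* Default: no metadata to save on swap-out; 0 means success. */
    static inline int arch_unmap_one(struct mm_struct *mm,
                                     struct vm_area_struct *vma,
                                     unsigned long addr, pte_t orig_pte)
    {
            return 0;
    }
    #endif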

Signed-off-by: Khalid Aziz &lt;khalid.aziz@oracle.com&gt;
Cc: Khalid Aziz &lt;khalid@gonehiking.org&gt;
Acked-by: Jerome Marchand &lt;jmarchan@redhat.com&gt;
Reviewed-by: Anthony Yznaga &lt;anthony.yznaga@oracle.com&gt;
Acked-by: Andrew Morton &lt;akpm@linux-foundation.org&gt;
Signed-off-by: David S. Miller &lt;davem@davemloft.net&gt;
</content>
</entry>
<entry>
<title>mm/thp: remove pmd_huge_split_prepare()</title>
<updated>2018-02-01T01:18:38Z</updated>
<author>
<name>Aneesh Kumar K.V</name>
<email>aneesh.kumar@linux.vnet.ibm.com</email>
</author>
<published>2018-02-01T00:18:24Z</published>
<link rel='alternate' type='text/html' href='https://git.shady.money/linux/commit/?id=423ac9af3ceff967a77b0714781033629593b077'/>
<id>urn:sha1:423ac9af3ceff967a77b0714781033629593b077</id>
<content type='text'>
Instead of marking the pmd ready for split, invalidate the pmd.  This
should take care of the powerpc requirement.  The only side effect is
that we mark the pmd invalid early.  This can result in us blocking
access to the page a bit longer if we race against a thp split.
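
In code terms, the call-site pattern changes roughly like this (a
sketch, not a verbatim diff of __split_huge_pmd_locked()):

    /* Old: give the architecture a chance to prepare for the split. */
    /* pmdp_huge_split_prepare(vma, haddr, pmd); */

    /* New: invalidate the pmd up front. A racing fault now sees an
     * invalid pmd and blocks on the page table lock until the split
     * completes, which is the "bit longer" mentioned above. */
    old_pmd = pmdp_invalidate(vma, haddr, pmd);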

[kirill.shutemov@linux.intel.com: rebased, dirty THP once]
Link: http://lkml.kernel.org/r/20171213105756.69879-13-kirill.shutemov@linux.intel.com
Signed-off-by: Aneesh Kumar K.V &lt;aneesh.kumar@linux.vnet.ibm.com&gt;
Signed-off-by: Kirill A. Shutemov &lt;kirill.shutemov@linux.intel.com&gt;
Cc: Andrea Arcangeli &lt;aarcange@redhat.com&gt;
Cc: Catalin Marinas &lt;catalin.marinas@arm.com&gt;
Cc: David Daney &lt;david.daney@cavium.com&gt;
Cc: David Miller &lt;davem@davemloft.net&gt;
Cc: H. Peter Anvin &lt;hpa@zytor.com&gt;
Cc: Hugh Dickins &lt;hughd@google.com&gt;
Cc: Ingo Molnar &lt;mingo@kernel.org&gt;
Cc: Martin Schwidefsky &lt;schwidefsky@de.ibm.com&gt;
Cc: Michal Hocko &lt;mhocko@kernel.org&gt;
Cc: Nitin Gupta &lt;nitin.m.gupta@oracle.com&gt;
Cc: Ralf Baechle &lt;ralf@linux-mips.org&gt;
Cc: Thomas Gleixner &lt;tglx@linutronix.de&gt;
Cc: Vineet Gupta &lt;vgupta@synopsys.com&gt;
Cc: Vlastimil Babka &lt;vbabka@suse.cz&gt;
Signed-off-by: Andrew Morton &lt;akpm@linux-foundation.org&gt;
Signed-off-by: Linus Torvalds &lt;torvalds@linux-foundation.org&gt;
</content>
</entry>
</feed>
