author	Ard Biesheuvel <ardb@kernel.org>	2025-08-28 12:22:09 +0200
committer	Borislav Petkov (AMD) <bp@alien8.de>	2025-09-03 17:58:15 +0200
commit	a5f03880f06a6da6ea5f1d966fffffcb3fc65462 (patch)
tree	c74cbddebdc916a9203060d6f3d824e83d09b99a /arch/x86/mm
parent	x86/sev: Move GHCB page based HV communication out of startup code (diff)
x86/sev: Avoid global variable to store virtual address of SVSM area
The boottime SVSM calling area is used both by the startup code running
from a 1:1 mapping, and potentially later on running from the ordinary
kernel mapping.

This SVSM calling area is statically allocated, and so its physical
address doesn't change. However, its virtual address depends on the
calling context (1:1 mapping or kernel virtual mapping), and even though
the variable that holds the virtual address of this calling area gets
updated from the 1:1 address to the kernel address during boot, it is
hard to reason about why this is guaranteed to be safe.

So instead, take the RIP-relative address of the boottime SVSM calling
area whenever its virtual address is required, and only use a global
variable for the physical address.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de>
Reviewed-by: Tom Lendacky <thomas.lendacky@amd.com>
Link: https://lore.kernel.org/20250828102202.1849035-30-ardb+git@google.com
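To illustrate the technique the message describes, here is a minimal,
self-contained userspace sketch, not the kernel's actual code: the names
boot_svsm_ca_page, boot_svsm_caa_pa and RIP_REL_PTR() are hypothetical
stand-ins, and the RIP-relative LEA merely stands in for however the
startup code takes a position-relative reference. The point is that the
virtual address of the statically allocated area is recomputed at each
use against whatever mapping the code currently runs from (1:1 early,
kernel virtual later), while only the unchanging physical address is
kept in a global variable.

/*
 * Minimal userspace sketch (hypothetical names, not the kernel's code):
 * locate a statically allocated calling area via a RIP-relative LEA so
 * the pointer is valid for whichever mapping the code currently executes
 * from, and keep only the unchanging physical address in a global.
 */
#include <stdint.h>
#include <stdio.h>

#define PAGE_SIZE 4096

/* Statically allocated boot-time calling area (stand-in). */
static uint8_t boot_svsm_ca_page[PAGE_SIZE]
	__attribute__((aligned(PAGE_SIZE), used));

/* Only the physical address lives in a global (placeholder value here). */
static uint64_t boot_svsm_caa_pa;

/*
 * Compute the address of 'sym' relative to the current instruction
 * pointer, so the result tracks the mapping the code is running from
 * instead of a previously cached virtual address.
 */
#define RIP_REL_PTR(sym) ({					\
	void *ptr__;						\
	asm ("leaq " #sym "(%%rip), %0" : "=r" (ptr__));	\
	ptr__;							\
})

int main(void)
{
	uint8_t *caa = RIP_REL_PTR(boot_svsm_ca_page);

	printf("calling area resolved to %p under the current mapping\n",
	       (void *)caa);
	printf("physical address kept globally: %#lx (placeholder)\n",
	       (unsigned long)boot_svsm_caa_pa);
	return 0;
}

Because the address is derived from the instruction pointer rather than
read back from a writable global, there is no window in which a stale
1:1-mapped address could be dereferenced after the switch to the kernel
mapping.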
Diffstat (limited to 'arch/x86/mm')
-rw-r--r--	arch/x86/mm/mem_encrypt_amd.c	6
1 file changed, 0 insertions, 6 deletions
diff --git a/arch/x86/mm/mem_encrypt_amd.c b/arch/x86/mm/mem_encrypt_amd.c
index faf3a13fb6ba..2f8c32173972 100644
--- a/arch/x86/mm/mem_encrypt_amd.c
+++ b/arch/x86/mm/mem_encrypt_amd.c
@@ -536,12 +536,6 @@ void __init sme_early_init(void)
 		x86_init.resources.dmi_setup = snp_dmi_setup;
 	}
 
-	/*
-	 * Switch the SVSM CA mapping (if active) from identity mapped to
-	 * kernel mapped.
-	 */
-	snp_update_svsm_ca();
-
 	if (sev_status & MSR_AMD64_SNP_SECURE_TSC)
 		setup_force_cpu_cap(X86_FEATURE_TSC_RELIABLE);
 }