File 5a956747-x86-HVM-dont-give-wrong-impression-of-WRMSR-success.patch of Package xen
References: bsc#1072834
# Commit 1f1d183d49008794b087cf043fc77f724a45af98
# Date 2018-02-27 15:12:23 +0100
# Author Jan Beulich <jbeulich@suse.com>
# Committer Jan Beulich <jbeulich@suse.com>
x86/HVM: don't give the wrong impression of WRMSR succeeding
... for non-existent MSRs: wrmsr_hypervisor_regs()'s comment clearly
says that the function returns 0 for unrecognized MSRs, so
{svm,vmx}_msr_write_intercept() should not convert this into success. We
don't want to unconditionally fail the access though, as we can't be
certain the list of handled MSRs is complete enough for the guest types
we care about, so instead mirror what we do on the read paths and probe
the MSR to decide whether to raise #GP.
Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
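[Editor's illustration, not part of the patch] The fallback added to the two write intercepts below boils down to the standalone sketch that follows. rdmsr_safe() is stubbed out here, the X86EMUL_* values are simplified local stand-ins rather than Xen's real definitions, and wrmsr_fallback() is a hypothetical helper name used only for this example.

    #include <stdint.h>
    #include <stdio.h>

    /* Simplified local stand-ins, not Xen's actual return codes. */
    #define X86EMUL_OKAY      0
    #define X86EMUL_EXCEPTION 1

    /*
     * Stub standing in for Xen's rdmsr_safe(): return 0 if the MSR can
     * be read without faulting.  This stub only "knows" MSR 0x8b.
     */
    static int rdmsr_safe_stub(uint32_t msr, uint64_t *val)
    {
        if ( msr == 0x8b )
        {
            *val = 0;
            return 0;
        }
        return -1;
    }

    /*
     * The fallback: when wrmsr_hypervisor_regs() reports "unrecognized"
     * (return value 0), probe the MSR on the read side and only raise
     * #GP if that probe faults as well.
     */
    static int wrmsr_fallback(uint32_t msr)
    {
        uint64_t probe;

        if ( rdmsr_safe_stub(msr, &probe) == 0 )
            return X86EMUL_OKAY;      /* MSR exists; tolerate the write */

        return X86EMUL_EXCEPTION;     /* non-existent MSR: inject #GP */
    }

    int main(void)
    {
        printf("0x008b -> %d\n", wrmsr_fallback(0x8b));   /* 0: tolerated */
        printf("0x1234 -> %d\n", wrmsr_fallback(0x1234)); /* 1: #GP */
        return 0;
    }

As in the hunks below, a successful read-side probe means the write is tolerated rather than faulting the guest; only MSRs that also fault on the read path now raise #GP.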
# Commit 59c0983e10d70ea2368085271b75fb007811fe52
# Date 2018-03-15 12:44:24 +0100
# Author Jan Beulich <jbeulich@suse.com>
# Committer Jan Beulich <jbeulich@suse.com>
x86: ignore guest microcode loading attempts
The respective MSRs are write-only, and hence attempts by guests to
write to these are - as of 1f1d183d49 ("x86/HVM: don't give the wrong
impression of WRMSR succeeding") - no longer ignored. Restore original
behavior for the two affected MSRs.
Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
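[Editor's illustration, not part of the patch] The guest_wrmsr() special-casing added below can be summarized by this sketch. The MSR index macros are re-defined locally, the vendor enum and the wrmsr_ucode_is_ignored() helper are hypothetical names used only here, and "ignored" means the write is accepted but has no effect.

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Local re-definitions for this sketch only. */
    #define MSR_IA32_UCODE_WRITE 0x00000079
    #define MSR_AMD_PATCHLOADER  0xc0010020

    enum vendor { VENDOR_INTEL, VENDOR_AMD, VENDOR_OTHER };

    /*
     * Return true if a guest write to 'msr' should be silently dropped:
     * HVM guest only, and only when the guest's CPUID vendor matches the
     * vendor the MSR belongs to.  Everything else keeps raising #GP.
     */
    static bool wrmsr_ucode_is_ignored(uint32_t msr, bool is_pv, enum vendor v)
    {
        switch ( msr )
        {
        case MSR_AMD_PATCHLOADER:
            return !is_pv && v == VENDOR_AMD;

        case MSR_IA32_UCODE_WRITE:
            return !is_pv && v == VENDOR_INTEL;
        }

        return false;
    }

    int main(void)
    {
        /* HVM guest with Intel CPUID: microcode-load write is dropped. */
        printf("%d\n", wrmsr_ucode_is_ignored(MSR_IA32_UCODE_WRITE, false,
                                              VENDOR_INTEL));
        /* PV guest: the write still faults with #GP. */
        printf("%d\n", wrmsr_ucode_is_ignored(MSR_IA32_UCODE_WRITE, true,
                                              VENDOR_INTEL));
        return 0;
    }
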
--- a/xen/arch/x86/hvm/svm/svm.c
+++ b/xen/arch/x86/hvm/svm/svm.c
@@ -2106,6 +2106,13 @@ static int svm_msr_write_intercept(unsig
result = X86EMUL_RETRY;
break;
case 0:
+ /*
+ * Match up with the RDMSR side for now; ultimately this entire
+ * case block should go away.
+ */
+ if ( rdmsr_safe(msr, msr_content) == 0 )
+ break;
+ goto gpf;
case 1:
break;
default:
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -3182,6 +3182,13 @@ static int vmx_msr_write_intercept(unsig
case -ERESTART:
return X86EMUL_RETRY;
case 0:
+ /*
+ * Match up with the RDMSR side for now; ultimately this
+ * entire case block should go away.
+ */
+ if ( rdmsr_safe(msr, msr_content) == 0 )
+ break;
+ goto gp_fault;
case 1:
break;
default:
--- a/xen/arch/x86/msr.c
+++ b/xen/arch/x86/msr.c
@@ -128,6 +128,8 @@ int guest_rdmsr(const struct vcpu *v, ui
switch ( msr )
{
+ case MSR_AMD_PATCHLOADER:
+ case MSR_IA32_UCODE_WRITE:
case MSR_PRED_CMD:
/* Write-only */
goto gp_fault;
@@ -181,6 +183,28 @@ int guest_wrmsr(struct vcpu *v, uint32_t
/* Read-only */
goto gp_fault;
+ case MSR_AMD_PATCHLOADER:
+ /*
+ * See note on MSR_IA32_UCODE_WRITE below, which may or may not apply
+ * to AMD CPUs as well (at least the architectural/CPUID part does).
+ */
+ if ( is_pv_domain(d) ||
+ d->arch.cpuid->x86_vendor != X86_VENDOR_AMD )
+ goto gp_fault;
+ break;
+
+ case MSR_IA32_UCODE_WRITE:
+ /*
+ * Some versions of Windows at least on certain hardware try to load
+ * microcode before setting up an IDT. Therefore we must not inject #GP
+ * for such attempts. Also the MSR is architectural and not qualified
+ * by any CPUID bit.
+ */
+ if ( is_pv_domain(d) ||
+ d->arch.cpuid->x86_vendor != X86_VENDOR_INTEL )
+ goto gp_fault;
+ break;
+
case MSR_SPEC_CTRL:
if ( !cp->feat.ibrsb )
goto gp_fault; /* MSR available? */