File 5899cbd9-EPT-allow-wrcomb-MMIO-mappings-again.patch of Package xen.7317
# Commit 30921dc2df3665ca1b2593595aa6725ff013d386
# Date 2017-02-07 14:30:01 +0100
# Author David Woodhouse <dwmw@amazon.com>
# Committer Jan Beulich <jbeulich@suse.com>
x86/ept: allow write-combining on !mfn_valid() MMIO mappings again

For some MMIO regions, such as those high above RAM, mfn_valid() will
return false.

Since the fix for XSA-154 in commit c61a6f74f80e ("x86: enforce
consistent cachability of MMIO mappings"), guests have no longer been
able to use PAT to obtain write-combining on such regions because the
'ignore PAT' bit is set in EPT.

We probably want to err on the side of caution and preserve that
behaviour for addresses in mmio_ro_ranges, but not for normal MMIO
mappings. That necessitates a slight refactoring to check mfn_valid()
later, and let the MMIO case get through to the right code path.

Since we're not bailing out for !mfn_valid() immediately, the range
checks need to be adjusted to cope, simply by masking in the low bits
to account for 'order' instead of adding, to avoid overflow when the mfn
is INVALID_MFN (which happens on unmap, since we carefully call this
function to fill in the EMT even though the PTE won't be valid).
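
To illustrate the masking point (a sketch only, reusing the existing
mfn_x()/order names; 'end' is merely shorthand for the inline expression
used in the rangeset calls):

    /* Old form: wraps around when mfn is INVALID_MFN (all ones). */
    end = mfn_x(mfn) + (1UL << order) - 1;
    /* New form: ORing in the low bits keeps INVALID_MFN saturated. */
    end = mfn_x(mfn) | ((1UL << order) - 1);
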
The range checks are also slightly refactored to put only one of them in
the fast path in the common case. If it doesn't overlap, then it
*definitely* isn't contained, so we don't need both checks. And if it
overlaps and is only one page, then it definitely *is* contained.

Finally, add a comment clarifying how that 'return -1' works: it isn't
returning an error and causing the mapping to fail; it relies on
resolve_misconfig() being able to split the mapping later. So it's
*only* sane to do it where order>0 and the 'problem' will be solved by
splitting the large page. Not for blindly returning 'error', which I was
tempted to do in my first attempt.
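
Roughly, the reworked check then takes the following shape (again just a
sketch built on the existing rangeset_overlaps_range() and
rangeset_contains_range() helpers, not the literal hunk):

    if ( rangeset_overlaps_range(mmio_ro_ranges, mfn_x(mfn),
                                 mfn_x(mfn) | ((1UL << order) - 1)) )
    {
        if ( !order ||
             rangeset_contains_range(mmio_ro_ranges, mfn_x(mfn),
                                     mfn_x(mfn) | ((1UL << order) - 1)) )
        {
            *ipat = 1;
            return MTRR_TYPE_UNCACHABLE;
        }
        /* Not an error: let resolve_misconfig() split the large page later. */
        return -1;
    }
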
Signed-off-by: David Woodhouse <dwmw@amazon.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>

--- a/xen/arch/x86/hvm/mtrr.c
+++ b/xen/arch/x86/hvm/mtrr.c
@@ -656,8 +656,7 @@ uint8_t epte_get_entry_emt(struct domain
if ( v->domain != d )
v = d->vcpu ? d->vcpu[0] : NULL;
- if ( !mfn_valid(mfn_x(mfn)) ||
- rangeset_contains_singleton(mmio_ro_ranges, mfn_x(mfn)) )
+ if ( rangeset_contains_singleton(mmio_ro_ranges, mfn_x(mfn)) )
{
*ipat = 1;
return MTRR_TYPE_UNCACHABLE;
@@ -666,27 +665,27 @@ uint8_t epte_get_entry_emt(struct domain
if ( hvm_get_mem_pinned_cacheattr(d, gfn, &type) )
return type;
- if ( !iommu_enabled ||
- (rangeset_is_empty(d->iomem_caps) &&
- rangeset_is_empty(d->arch.ioport_caps) &&
- !has_arch_pdevs(d)) )
+ if ( direct_mmio )
{
- ASSERT(!direct_mmio ||
- mfn_x(mfn) == d->arch.hvm_domain.vmx.apic_access_mfn);
+ if ( mfn_x(mfn) != d->arch.hvm_domain.vmx.apic_access_mfn )
+ return MTRR_TYPE_UNCACHABLE;
*ipat = 1;
return MTRR_TYPE_WRBACK;
}
- if ( direct_mmio )
+ if ( !mfn_valid(mfn_x(mfn)) )
{
- if ( mfn_x(mfn) != d->arch.hvm_domain.vmx.apic_access_mfn )
- return MTRR_TYPE_UNCACHABLE;
*ipat = 1;
- return MTRR_TYPE_WRBACK;
+ return MTRR_TYPE_UNCACHABLE;
}
- if ( iommu_snoop )
+ if ( !iommu_enabled || iommu_snoop ||
+ (rangeset_is_empty(d->iomem_caps) &&
+ rangeset_is_empty(d->arch.ioport_caps) &&
+ !has_arch_pdevs(d)) )
{
+ ASSERT(!direct_mmio ||
+ mfn_x(mfn) == d->arch.hvm_domain.vmx.apic_access_mfn);
*ipat = 1;
return MTRR_TYPE_WRBACK;
}