File xsa379.patch of Package xen.25143
x86/mm: widen locked region in xenmem_add_to_physmap_one()

For pages which can be made part of the P2M by the guest, but which can
also later be de-allocated (grant table v2 status pages being the
present example), it is imperative that they be mapped at no more than a
single GFN. We therefore need to make sure that of two parallel
XENMAPSPACE_grant_table requests for the same status page one completes
before the second checks at which other GFN the underlying MFN is
presently mapped.

Push down the respective put_gfn(). This leverages that gfn_lock()
really aliases p2m_lock(), but the function makes this assumption
already anyway: In the XENMAPSPACE_gmfn case lock nesting constraints
for both involved GFNs would otherwise need to be enforced to avoid ABBA
deadlocks.

This is XSA-379.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Julien Grall <jgrall@amazon.com>

--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -5715,8 +5715,20 @@ int xenmem_add_to_physmap_one(
return -EINVAL;
}

- /* Remove previously mapped page if it was present. */
+ /*
+ * Note that we're (ab)using GFN locking (to really be locking of the
+ * entire P2M) here in (at least) two ways: Finer grained locking would
+ * expose lock order violations in the XENMAPSPACE_gmfn case (due to the
+ * earlier get_gfn_unshare() above). Plus at the very least for the grant
+ * table v2 status page case we need to guarantee that the same page can
+ * only appear at a single GFN. While this is a property we want in
+ * general, for pages which can subsequently be freed this is imperative:
+ * Upon freeing we wouldn't be able to find other mappings in the P2M
+ * (unless we did a brute force search).
+ */
prev_mfn = mfn_x(get_gfn(d, gpfn, &p2mt));
+
+ /* Remove previously mapped page if it was present. */
if ( p2mt == p2m_mmio_direct )
rc = -EPERM;
else if ( mfn_valid(prev_mfn) )
@@ -5728,29 +5740,22 @@ int xenmem_add_to_physmap_one(
/* Normal domain memory is freed, to avoid leaking memory. */
rc = guest_remove_page(d, gpfn);
}
- /* In the XENMAPSPACE_gmfn case we still hold a ref on the old page. */
- put_gfn(d, gpfn);
-
- if ( rc )
- goto put_both;

/* Unmap from old location, if any. */
old_gpfn = get_gpfn_from_mfn(mfn);
ASSERT( old_gpfn != SHARED_M2P_ENTRY );
if ( (space == XENMAPSPACE_gmfn || space == XENMAPSPACE_gmfn_range) &&
old_gpfn != gfn )
- {
rc = -EXDEV;
- goto put_both;
- }
- if ( old_gpfn != INVALID_M2P_ENTRY )
+ else if ( !rc && old_gpfn != INVALID_M2P_ENTRY )
rc = guest_physmap_remove_page(d, old_gpfn, mfn, PAGE_ORDER_4K);

/* Map at new location. */
if ( !rc )
rc = guest_physmap_add_page(d, gpfn, mfn, PAGE_ORDER_4K);

- put_both:
+ put_gfn(d, gpfn);
+
/*
* In the XENMAPSPACE_gmfn case, we took a ref of the gfn at the top.
* We also may need to transfer ownership of the page reference to our