
KVM: PPC: Validate all tces before updating tables
author	Alexey Kardashevskiy <aik@ozlabs.ru>
	Mon, 10 Sep 2018 08:29:08 +0000 (18:29 +1000)
committer	Michael Ellerman <mpe@ellerman.id.au>
	Tue, 2 Oct 2018 13:09:26 +0000 (23:09 +1000)
The KVM TCE handlers are written so that they fail either when
something has gone horribly wrong or when userspace made an obvious
mistake such as passing a misaligned address.

We are going to enhance the TCE checker to fail on attempts to map a
bigger IOMMU page than the underlying pinned memory, so let's validate
the TCEs beforehand.

This should cause no behavioral change.
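The pattern the patch moves to can be sketched as a two-pass update: validate every entry first, and only commit once the whole batch has passed, so a bad entry leaves the table untouched. The sketch below is illustrative only; the names (`entry_valid`, `update_table`, `ENTRY_ALIGN_MASK`) are hypothetical stand-ins for `kvmppc_tce_validate()` and the real TCE table code.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical alignment mask standing in for the real TCE page mask. */
#define ENTRY_ALIGN_MASK 0xfffULL

/* Reject misaligned entries, mirroring the kind of check
 * kvmppc_tce_validate() performs (this helper is illustrative). */
static bool entry_valid(uint64_t entry)
{
	return (entry & ENTRY_ALIGN_MASK) == 0;
}

/* Two-pass update: first validate the whole batch, then commit.
 * A failure in the first loop returns before any table entry is
 * modified, which is the behavior the patch is preserving. */
static int update_table(uint64_t *table, const uint64_t *entries, size_t n)
{
	size_t i;

	for (i = 0; i < n; i++)
		if (!entry_valid(entries[i]))
			return -1;	/* fail before any side effects */

	for (i = 0; i < n; i++)
		table[i] = entries[i];

	return 0;
}
```

In the real handlers the second pass re-reads the TCE from userspace, which is why the in-tree comment below stresses that the relevant checks are re-executed there: a concurrent userspace modification can only make the failure messier, not unsafe.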

Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
arch/powerpc/kvm/book3s_64_vio.c
arch/powerpc/kvm/book3s_64_vio_hv.c

index 9a3f264..3c17977 100644 (file)
@@ -599,6 +599,24 @@ long kvmppc_h_put_tce_indirect(struct kvm_vcpu *vcpu,
                ret = kvmppc_tce_validate(stt, tce);
                if (ret != H_SUCCESS)
                        goto unlock_exit;
+       }
+
+       for (i = 0; i < npages; ++i) {
+               /*
+                * This looks unsafe, because we validate, then regrab
+                * the TCE from userspace which could have been changed by
+                * another thread.
+                *
+                * But it actually is safe, because the relevant checks will be
+                * re-executed in the following code.  If userspace tries to
+                * change this dodgily it will result in a messier failure mode
+                * but won't threaten the host.
+                */
+               if (get_user(tce, tces + i)) {
+                       ret = H_TOO_HARD;
+                       goto unlock_exit;
+               }
+               tce = be64_to_cpu(tce);
 
                if (kvmppc_gpa_to_ua(vcpu->kvm,
                                tce & ~(TCE_PCI_READ | TCE_PCI_WRITE),
index 6821ead..c2848e0 100644 (file)
@@ -524,6 +524,10 @@ long kvmppc_rm_h_put_tce_indirect(struct kvm_vcpu *vcpu,
                ret = kvmppc_tce_validate(stt, tce);
                if (ret != H_SUCCESS)
                        goto unlock_exit;
+       }
+
+       for (i = 0; i < npages; ++i) {
+               unsigned long tce = be64_to_cpu(((u64 *)tces)[i]);
 
                ua = 0;
                if (kvmppc_gpa_to_ua(vcpu->kvm,