KVM: arm64: PMU: Sanitise PMCR_EL0.LP on first vcpu run
author    Marc Zyngier <maz@kernel.org>
          Thu, 24 Nov 2022 10:44:59 +0000 (10:44 +0000)
committer Marc Zyngier <maz@kernel.org>
          Mon, 28 Nov 2022 14:04:08 +0000 (14:04 +0000)
Userspace can play some dirty tricks on us by selecting a given
PMU version (such as PMUv3p5), restoring a PMCR_EL0 value that
has PMCR_EL0.LP set, and then switching the PMU version to PMUv3p1,
for example. In this situation, we end up with PMCR_EL0.LP being
set and spreading havoc in the PMU emulation.

This is especially hard as the first two steps can be done on
one vcpu and the third step on another, meaning that we need
to sanitise *all* vcpus when the PMU version is changed.

In order to avoid a pretty complicated locking situation,
defer the sanitisation of PMCR_EL0 to the point where the
vcpu is actually run for the first time, using the existing
KVM_REQ_RELOAD_PMU request that calls into kvm_pmu_handle_pmcr().

There is still an obscure corner case where userspace could
do the above trick, and then save the VM without running it.
They would then observe an inconsistent state (PMUv3p1 + LP set),
but that state will be fixed on the first run anyway whenever
the guest gets restored on a host.

Reported-by: Reiji Watanabe <reijiw@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
arch/arm64/kvm/pmu-emul.c
arch/arm64/kvm/sys_regs.c

index bb7251e..d8ea399 100644
@@ -538,6 +538,12 @@ void kvm_pmu_handle_pmcr(struct kvm_vcpu *vcpu, u64 val)
        if (!kvm_vcpu_has_pmu(vcpu))
                return;
 
+       /* Fixup PMCR_EL0 to reconcile the PMU version and the LP bit */
+       if (!kvm_pmu_is_3p5(vcpu))
+               val &= ~ARMV8_PMU_PMCR_LP;
+
+       __vcpu_sys_reg(vcpu, PMCR_EL0) = val;
+
        if (val & ARMV8_PMU_PMCR_E) {
                kvm_pmu_enable_counter_mask(vcpu,
                       __vcpu_sys_reg(vcpu, PMCNTENSET_EL0));
index eb56ad0..528d253 100644
@@ -693,15 +693,15 @@ static bool access_pmcr(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
                return false;
 
        if (p->is_write) {
-               /* Only update writeable bits of PMCR */
+               /*
+                * Only update writeable bits of PMCR (continuing into
+                * kvm_pmu_handle_pmcr() as well)
+                */
                val = __vcpu_sys_reg(vcpu, PMCR_EL0);
                val &= ~ARMV8_PMU_PMCR_MASK;
                val |= p->regval & ARMV8_PMU_PMCR_MASK;
                if (!kvm_supports_32bit_el0())
                        val |= ARMV8_PMU_PMCR_LC;
-               if (!kvm_pmu_is_3p5(vcpu))
-                       val &= ~ARMV8_PMU_PMCR_LP;
-               __vcpu_sys_reg(vcpu, PMCR_EL0) = val;
                kvm_pmu_handle_pmcr(vcpu, val);
                kvm_vcpu_pmu_restore_guest(vcpu);
        } else {