KVM: arm64: Fix handling of merging tables into a block entry
author: Yanan Wang <wangyanan55@huawei.com>
        Tue, 1 Dec 2020 20:10:33 +0000 (04:10 +0800)
committer: Marc Zyngier <maz@kernel.org>
        Wed, 2 Dec 2020 09:42:36 +0000 (09:42 +0000)
When dirty logging is enabled, we collapse block entries into tables
as necessary. If dirty logging gets canceled, we can end up merging
tables back into block entries.

When this happens, we must not only free the non-huge page-table
pages but also invalidate all the TLB entries that can potentially
cover the block. Otherwise, we end up with multiple possible translations
for the same physical page, which can legitimately result in a TLB
conflict.

To address this, replace the bogus invalidation by IPA with a full
VM invalidation. Although this is pretty heavy-handed, it happens
very infrequently and saves a bunch of invalidations by IPA.

Signed-off-by: Yanan Wang <wangyanan55@huawei.com>
[maz: fixup commit message]
Signed-off-by: Marc Zyngier <maz@kernel.org>
Acked-by: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20201201201034.116760-3-wangyanan55@huawei.com
arch/arm64/kvm/hyp/pgtable.c

index 2beba1d..bdf8e55 100644 (file)
@@ -502,7 +502,13 @@ static int stage2_map_walk_table_pre(u64 addr, u64 end, u32 level,
                return 0;
 
        kvm_set_invalid_pte(ptep);
-       kvm_call_hyp(__kvm_tlb_flush_vmid_ipa, data->mmu, addr, 0);
+
+       /*
+        * Invalidate the whole stage-2, as we may have numerous leaf
+        * entries below us which would otherwise need invalidating
+        * individually.
+        */
+       kvm_call_hyp(__kvm_tlb_flush_vmid, data->mmu);
        data->anchor = ptep;
        return 0;
 }