KVM: x86/mmu: Optimize MMU page cache lookup for all direct SPs
author     David Matlack <dmatlack@google.com>
           Wed, 22 Jun 2022 19:26:48 +0000 (15:26 -0400)
committer  Paolo Bonzini <pbonzini@redhat.com>
           Fri, 24 Jun 2022 08:51:52 +0000 (04:51 -0400)
commit     bb924ca69f71b4f54c8f07be28e2b76f5ad1d2ac
tree       81aa84f866d83e9d8e3f80c821f71d8343b79c01
parent     83f6e109f562063ab7a1f54d99bcab2858b09ead
KVM: x86/mmu: Optimize MMU page cache lookup for all direct SPs

Commit fb58a9c345f6 ("KVM: x86/mmu: Optimize MMU page cache lookup for
fully direct MMUs") skipped the unsync checks and write-flood clearing
for fully direct MMUs. We can extend this further and skip the checks
for all direct shadow pages. Direct shadow pages in indirect MMUs (i.e.
shadow paging) are used when shadowing a guest huge page with smaller
pages. Such direct shadow pages, like their counterparts in fully direct
MMUs, are never marked unsync and never have a non-zero write-flooding
count.
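
As a rough illustration of that reasoning, below is a minimal,
self-contained C sketch of an SP cache lookup. The types and names
(union sp_role, struct shadow_page, lookup_sp) are invented for the
sketch; the real lookup lives in arch/x86/kvm/mmu/mmu.c
(kvm_mmu_get_page() at the time of this commit) and differs in detail.

    /*
     * Toy model of the SP cache-lookup logic; the types and names here
     * are invented for the sketch and are not KVM's.
     */
    #include <stdbool.h>
    #include <stddef.h>

    union sp_role {
            unsigned int word;               /* all role bits, compared at once */
            struct {
                    unsigned int direct : 1; /* SP maps guest memory directly */
                    unsigned int level  : 4;
            };
    };

    struct shadow_page {
            unsigned long gfn;
            union sp_role role;
            bool unsync;
            unsigned int write_flooding_count;
    };

    static struct shadow_page *lookup_sp(struct shadow_page **cache, size_t n,
                                         unsigned long gfn, union sp_role role)
    {
            for (size_t i = 0; i < n; i++) {
                    struct shadow_page *sp = cache[i];

                    if (sp->gfn != gfn || sp->role.word != role.word)
                            continue;

                    /*
                     * Direct SPs are never unsync and never accumulate a
                     * write-flooding count, so skip that handling for any
                     * direct SP, not only when the whole MMU is direct.
                     */
                    if (sp->role.direct)
                            return sp;

                    if (sp->unsync) {
                            /* resync/zap handling elided in this sketch */
                    }
                    sp->write_flooding_count = 0;

                    return sp;
            }
            return NULL;
    }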

Checking sp->role.direct also generates better code than checking
direct_map because, due to register pressure, direct_map has to get
shoved onto the stack and then pulled back off.
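
For contrast, here is a sketch of the pre-change shape of the same
lookup, reusing the toy types above: the skip keys off an MMU-wide flag
(direct_mmu here, standing in for the local computed from direct_map)
rather than sp->role.direct. That flag is one extra value the compiler
has to keep live across the loop, which is where the register-pressure
cost comes from; whether it is actually spilled depends on the compiler
and the surrounding code.

    /*
     * Pre-change shape of the lookup (toy types from the sketch above).
     * The MMU-wide flag is an additional live value alongside sp itself.
     */
    static struct shadow_page *lookup_sp_old(struct shadow_page **cache, size_t n,
                                             unsigned long gfn, union sp_role role,
                                             bool direct_mmu)
    {
            for (size_t i = 0; i < n; i++) {
                    struct shadow_page *sp = cache[i];

                    if (sp->gfn != gfn || sp->role.word != role.word)
                            continue;

                    /* MMU-wide flag: one more value to keep live. */
                    if (direct_mmu)
                            return sp;

                    if (sp->unsync) {
                            /* resync/zap handling elided in this sketch */
                    }
                    sp->write_flooding_count = 0;

                    return sp;
            }
            return NULL;
    }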

No functional change intended.

Reviewed-by: Lai Jiangshan <jiangshanlai@gmail.com>
Reviewed-by: Sean Christopherson <seanjc@google.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
Signed-off-by: David Matlack <dmatlack@google.com>
Message-Id: <20220516232138.1783324-2-dmatlack@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
arch/x86/kvm/mmu/mmu.c