Call cond_resched_lock() when zapping MMIO sptes to reschedule if
needed or to release and reacquire mmu_lock in case of contention.
There is no need to flush or zap when temporarily dropping mmu_lock,
as zapping MMIO sptes is done while holding the memslots lock and with
the "update in-progress" bit set in the memslots generation, which
disables MMIO spte caching.
The walk does need to be restarted if mmu_lock is dropped as the active
pages list may be modified.
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
 	list_for_each_entry_safe(sp, node, &kvm->arch.active_mmu_pages, link) {
 		if (!sp->mmio_cached)
 			continue;
-		if (kvm_mmu_prepare_zap_page(kvm, sp, &invalid_list))
+		if (kvm_mmu_prepare_zap_page(kvm, sp, &invalid_list) ||
+		    cond_resched_lock(&kvm->mmu_lock))
 			goto restart;
 	}
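
For context, with the change applied the zap loop reads roughly as
below. Everything outside the hunk (the kvm_mmu_zap_mmio_sptes()
wrapper, the locking, the restart label and the final
kvm_mmu_commit_zap_page() call) is a sketch of the surrounding mmu.c
code, not part of this diff, and is shown only to illustrate why the
walk restarts from the head of the active pages list whenever mmu_lock
may have been dropped.

static void kvm_mmu_zap_mmio_sptes(struct kvm *kvm)
{
	struct kvm_mmu_page *sp, *node;
	LIST_HEAD(invalid_list);

	spin_lock(&kvm->mmu_lock);
restart:
	list_for_each_entry_safe(sp, node, &kvm->arch.active_mmu_pages, link) {
		if (!sp->mmio_cached)
			continue;
		/*
		 * If preparing the zap touched more of the list than this
		 * page, or if cond_resched_lock() dropped mmu_lock and let
		 * other paths modify active_mmu_pages, the saved iterator
		 * can no longer be trusted; restart the walk.
		 */
		if (kvm_mmu_prepare_zap_page(kvm, sp, &invalid_list) ||
		    cond_resched_lock(&kvm->mmu_lock))
			goto restart;
	}

	kvm_mmu_commit_zap_page(kvm, &invalid_list);
	spin_unlock(&kvm->mmu_lock);
}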