
KVM: MMU: Use list_for_each_entry_safe in kvm_mmu_commit_zap_page()
Author:     Takuya Yoshikawa <yoshikawa_takuya_b1@lab.ntt.co.jp>
AuthorDate: Wed, 6 Mar 2013 07:05:52 +0000 (16:05 +0900)
Commit:     Marcelo Tosatti <mtosatti@redhat.com>
CommitDate: Thu, 7 Mar 2013 20:26:27 +0000 (17:26 -0300)
We traverse the linked list invalid_list while deleting each entry with
kvm_mmu_free_page().  The _safe iterator variant exists precisely for this
case: it caches the next entry before the current one is freed.

Reviewed-by: Gleb Natapov <gleb@redhat.com>
Signed-off-by: Takuya Yoshikawa <yoshikawa_takuya_b1@lab.ntt.co.jp>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
arch/x86/kvm/mmu.c

index 3e4822b..0f42645 100644
@@ -2087,7 +2087,7 @@ static int kvm_mmu_prepare_zap_page(struct kvm *kvm, struct kvm_mmu_page *sp,
 static void kvm_mmu_commit_zap_page(struct kvm *kvm,
                                    struct list_head *invalid_list)
 {
-       struct kvm_mmu_page *sp;
+       struct kvm_mmu_page *sp, *nsp;
 
        if (list_empty(invalid_list))
                return;
@@ -2104,11 +2104,10 @@ static void kvm_mmu_commit_zap_page(struct kvm *kvm,
         */
        kvm_flush_remote_tlbs(kvm);
 
-       do {
-               sp = list_first_entry(invalid_list, struct kvm_mmu_page, link);
+       list_for_each_entry_safe(sp, nsp, invalid_list, link) {
                WARN_ON(!sp->role.invalid || sp->root_count);
                kvm_mmu_free_page(sp);
-       } while (!list_empty(invalid_list));
+       }
 }
 
 /*