uclinux-h8/linux.git
5 years ago x86/kvm/mmu: get rid of redundant kvm_mmu_setup()
Paolo Bonzini [Mon, 8 Oct 2018 19:28:09 +0000 (21:28 +0200)]
x86/kvm/mmu: get rid of redundant kvm_mmu_setup()

Just inline the contents into the sole caller; kvm_init_mmu is now
public.

Suggested-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Sean Christopherson <sean.j.christopherson@intel.com>
5 years ago x86/kvm/mmu: introduce guest_mmu
Vitaly Kuznetsov [Mon, 8 Oct 2018 19:28:08 +0000 (21:28 +0200)]
x86/kvm/mmu: introduce guest_mmu

When EPT is used for a nested guest we need to re-init the MMU as a shadow
EPT MMU (nested_ept_init_mmu_context() does that). When we return back
from L2 to L1 kvm_mmu_reset_context() in nested_vmx_load_cr3() resets
MMU back to normal TDP mode. Add a special 'guest_mmu' so we can use
separate root caches; the improved hit rate is not very important for
single vCPU performance, but it avoids contention on the mmu_lock for
many vCPUs.

On the nested CPUID benchmark, with 16 vCPUs, an L2->L1->L2 vmexit
goes from 42k to 26k cycles.

Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Reviewed-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5 years ago x86/kvm/mmu.c: add kvm_mmu parameter to kvm_mmu_free_roots()
Vitaly Kuznetsov [Mon, 8 Oct 2018 19:28:07 +0000 (21:28 +0200)]
x86/kvm/mmu.c: add kvm_mmu parameter to kvm_mmu_free_roots()

Add an option to specify which MMU root we want to free. This will
be used when nested and non-nested MMUs for L1 are split.

Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Sean Christopherson <sean.j.christopherson@intel.com>
5 years ago x86/kvm/mmu.c: set get_pdptr hook in kvm_init_shadow_ept_mmu()
Vitaly Kuznetsov [Mon, 8 Oct 2018 19:28:06 +0000 (21:28 +0200)]
x86/kvm/mmu.c: set get_pdptr hook in kvm_init_shadow_ept_mmu()

kvm_init_shadow_ept_mmu() doesn't set the get_pdptr() hook, and this is
not a problem only because the MMU context is already initialized and the
hook already points to kvm_pdptr_read(). As we intend to use a dedicated
MMU for shadow EPT, set this hook explicitly.

Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Sean Christopherson <sean.j.christopherson@intel.com>
5 years ago x86/kvm/mmu: make vcpu->mmu a pointer to the current MMU
Vitaly Kuznetsov [Mon, 8 Oct 2018 19:28:05 +0000 (21:28 +0200)]
x86/kvm/mmu: make vcpu->mmu a pointer to the current MMU

As a preparation to full MMU split between L1 and L2 make vcpu->arch.mmu
a pointer to the currently used mmu. For now, this is always
vcpu->arch.root_mmu. No functional change.

Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Sean Christopherson <sean.j.christopherson@intel.com>
5 years ago kvm: x86: optimize dr6 restore
Paolo Bonzini [Wed, 22 Aug 2018 17:59:33 +0000 (19:59 +0200)]
kvm: x86: optimize dr6 restore

The quote from the comment almost says it all: we are currently zeroing
the guest dr6 in kvm_arch_vcpu_put, because do_debug expects it.  However,
the host %dr6 is either:

- zero because the guest hasn't run after kvm_arch_vcpu_load

- written from vcpu->arch.dr6 by vcpu_enter_guest

- written by the guest and copied to vcpu->arch.dr6 by ->sync_dirty_debug_regs().

Therefore, we can skip the write if vcpu->arch.dr6 is already zero.  We
may do extra useless writes if vcpu->arch.dr6 is nonzero but the guest
hasn't run; however that is less important for performance.
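
As a minimal illustration of the idea (simplified stand-in variables, not
the actual patch):

  /* Skip the hardware %dr6 write when the tracked guest value is already
   * zero; hw_dr6 stands in for the real debug register. */
  static unsigned long hw_dr6;

  static void put_guest_dr6(unsigned long guest_dr6)
  {
          if (guest_dr6 != 0)
                  hw_dr6 = 0;     /* only write when it may be stale */
  }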

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5 years ago KVM: x86: hyperv: optimize sparse VP set processing
Vitaly Kuznetsov [Wed, 10 Oct 2018 15:14:38 +0000 (17:14 +0200)]
KVM: x86: hyperv: optimize sparse VP set processing

Rewrite kvm_hv_flush_tlb()/send_ipi_vcpus_mask() making them cleaner and
somewhat more optimal.

hv_vcpu_in_sparse_set() is converted to sparse_set_to_vcpu_mask()
which copies sparse banks u64-at-a-time and then, depending on the
num_mismatched_vp_indexes value, returns immediately or does
vp index to vcpu index conversion by walking all vCPUs.
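
A self-contained sketch of the bank-copy step described above; the types,
names and fixed 64-bank bound are simplifying assumptions rather than the
kernel implementation, and the optional VP-index to vCPU-index walk is
omitted.

  #include <stdint.h>
  #include <string.h>

  /* Expand a sparse VP set (a 64-bit valid_bank_mask plus one u64 of VP bits
   * per set bank) into a flat bitmap indexed by VP number, copying the banks
   * u64-at-a-time. */
  static void sparse_set_to_vp_bitmap(uint64_t valid_bank_mask,
                                      const uint64_t *sparse_banks,
                                      uint64_t vp_bitmap[64])
  {
          int bank, src = 0;

          memset(vp_bitmap, 0, 64 * sizeof(vp_bitmap[0]));
          for (bank = 0; bank < 64; bank++) {
                  if (valid_bank_mask & (1ULL << bank))
                          vp_bitmap[bank] = sparse_banks[src++];
          }
  }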

To support the change and make kvm_hv_send_ipi() look similar to
kvm_hv_flush_tlb() send_ipi_vcpus_mask() is introduced.

Suggested-by: Roman Kagan <rkagan@virtuozzo.com>
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Reviewed-by: Roman Kagan <rkagan@virtuozzo.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5 years ago KVM: x86: hyperv: fix 'tlb_lush' typo
Vitaly Kuznetsov [Mon, 8 Oct 2018 17:19:04 +0000 (19:19 +0200)]
KVM: x86: hyperv: fix 'tlb_lush' typo

Regardless of whether your TLB is lush or not, it still needs flushing.

Reported-by: Roman Kagan <rkagan@virtuozzo.com>
Reviewed-by: Roman Kagan <rkagan@virtuozzo.com>
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5 years ago KVM: nVMX: WARN if nested run hits VMFail with early consistency checks enabled
Sean Christopherson [Wed, 26 Sep 2018 16:23:58 +0000 (09:23 -0700)]
KVM: nVMX: WARN if nested run hits VMFail with early consistency checks enabled

When early consistency checks are enabled, all VMFail conditions
should be caught by nested_vmx_check_vmentry_hw().

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5 years ago KVM: nVMX: add option to perform early consistency checks via H/W
Sean Christopherson [Wed, 26 Sep 2018 16:23:57 +0000 (09:23 -0700)]
KVM: nVMX: add option to perform early consistency checks via H/W

KVM defers many VMX consistency checks to the CPU, ostensibly for
performance reasons[1], including checks that result in VMFail (as
opposed to VMExit).  This behavior may be undesirable for some users
since this means KVM detects certain classes of VMFail only after it
has processed guest state, e.g. emulated MSR load-on-entry.  Because
there is a strict ordering between checks that cause VMFail and those
that cause VMExit, i.e. all VMFail checks are performed before any
checks that cause VMExit, we can detect (almost) all VMFail conditions
via a dry run of sorts.  The almost qualifier exists because some
state in vmcs02 comes from L0, e.g. VPID, which means that hardware
will never detect an invalid VPID in vmcs12 because it never sees
said value.  Software must (continue to) explicitly check such fields.

After preparing vmcs02 with all state needed to pass the VMFail
consistency checks, optionally do a "test" VMEnter with an invalid
GUEST_RFLAGS.  If the VMEnter results in a VMExit (due to bad guest
state), then we can safely say that the nested VMEnter should not
VMFail, i.e. any VMFail encountered in nested_vmx_vmexit() must
be due to an L0 bug.  GUEST_RFLAGS is used to induce VMExit as it
is unconditionally loaded on all implementations of VMX, has an
invalid value that is writable on a 32-bit system and its consistency
check is performed relatively early in all implementations (the exact
order of consistency checks is micro-architectural).

Unfortunately, since the "passing" case causes a VMExit, KVM must
be extra diligent to ensure that host state is restored, e.g. DR7
and RFLAGS are reset on VMExit.  Failure to restore RFLAGS.IF is
particularly fatal.

And of course the extra VMEnter and VMExit impacts performance.
The raw overhead of the early consistency checks is ~6% on modern
hardware (though this could easily vary based on configuration),
while the added latency observed from the L1 VMM is ~10%.  The
early consistency checks do not occur in a vacuum, e.g. spending
more time in L0 can lead to more interrupts being serviced while
emulating VMEnter, thereby increasing the latency observed by L1.

Add a module param, early_consistency_checks, to provide control
over whether or not VMX performs the early consistency checks.
In addition to standard on/off behavior, the param accepts a value
of -1, which is essentially an "auto" setting whereby KVM does
the early checks only when it thinks it's running on bare metal.
When running nested, doing early checks is of dubious value since
the resulting behavior is heavily dependent on L0.  In the future,
the "auto" setting could also be used to default to skipping the
early hardware checks for certain configurations/platforms if KVM
reaches a state where it has 100% coverage of VMFail conditions.
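
A minimal sketch of the tri-state semantics described above; the helper and
its argument are illustrative assumptions, not the module's actual
interface.

  /* -1 = "auto" (only on bare metal), 0 = off, anything else = on. */
  static int early_consistency_checks = -1;

  static int should_do_early_checks(int running_on_bare_metal)
  {
          if (early_consistency_checks < 0)       /* auto */
                  return running_on_bare_metal;
          return early_consistency_checks != 0;
  }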

[1] To my knowledge no one has implemented and tested full software
    emulation of the VMFail consistency checks.  Until that happens,
    one can only speculate about the actual performance overhead of
    doing all VMFail consistency checks in software.  Obviously any
    code is slower than no code, but in the grand scheme of nested
    virtualization it's entirely possible the overhead is negligible.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5 years ago KVM: vmx: write HOST_IA32_EFER in vmx_set_constant_host_state()
Sean Christopherson [Wed, 26 Sep 2018 16:23:56 +0000 (09:23 -0700)]
KVM: vmx: write HOST_IA32_EFER in vmx_set_constant_host_state()

EFER is constant in the host and writing it once during setup means
we can skip writing the host value in add_atomic_switch_msr_special().

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Reviewed-by: Jim Mattson <jmattson@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5 years ago KVM: nVMX: call kvm_skip_emulated_instruction in nested_vmx_{fail,succeed}
Sean Christopherson [Wed, 26 Sep 2018 16:23:55 +0000 (09:23 -0700)]
KVM: nVMX: call kvm_skip_emulated_instruction in nested_vmx_{fail,succeed}

... as every invocation of nested_vmx_{fail,succeed} is immediately
followed by a call to kvm_skip_emulated_instruction().  This saves
a bit of code and eliminates some silly paths, e.g. nested_vmx_run()
ended up with a goto label purely used to call and return
kvm_skip_emulated_instruction().

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Reviewed-by: Jim Mattson <jmattson@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5 years ago KVM: nVMX: do not call nested_vmx_succeed() for consistency check VMExit
Sean Christopherson [Wed, 26 Sep 2018 16:23:54 +0000 (09:23 -0700)]
KVM: nVMX: do not call nested_vmx_succeed() for consistency check VMExit

EFLAGS is set to a fixed value on VMExit, so calling nested_vmx_succeed()
is unnecessary and wrong.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Reviewed-by: Jim Mattson <jmattson@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5 years ago KVM: nVMX: do not skip VMEnter instruction that succeeds
Sean Christopherson [Wed, 26 Sep 2018 16:23:53 +0000 (09:23 -0700)]
KVM: nVMX: do not skip VMEnter instruction that succeeds

A successful VMEnter is essentially a fancy indirect branch that
pulls the target RIP from the VMCS.  Skipping the instruction is
unnecessary (RIP will get overwritten by the VMExit handler) and
is problematic because it can incorrectly suppress a #DB due to
EFLAGS.TF when a VMFail is detected by hardware (happens after we
skip the instruction).

Now that nested_vmx_run() is not prematurely skipping the instruction,
use the full kvm_skip_emulated_instruction() in the VMFail path
of nested_vmx_vmexit().  We also need to explicitly update the
GUEST_INTERRUPTIBILITY_INFO when loading vmcs12 host state.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Reviewed-by: Jim Mattson <jmattson@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5 years ago KVM: nVMX: do early preparation of vmcs02 before check_vmentry_postreqs()
Sean Christopherson [Wed, 26 Sep 2018 16:23:52 +0000 (09:23 -0700)]
KVM: nVMX: do early preparation of vmcs02 before check_vmentry_postreqs()

In anticipation of using vmcs02 to do early consistency checks, move
the early preparation of vmcs02 prior to checking the postreqs.  The
downside of this approach is that we'll unnecessarily load vmcs02 in
the case that check_vmentry_postreqs() fails, but that is essentially
our slow path anyways (not actually slow, but it's the path we don't
really care about optimizing).

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Reviewed-by: Jim Mattson <jmattson@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5 years ago KVM: nVMX: initialize vmcs02 constant exactly once (per VMCS)
Sean Christopherson [Wed, 26 Sep 2018 16:23:51 +0000 (09:23 -0700)]
KVM: nVMX: initialize vmcs02 constant exactly once (per VMCS)

Add a dedicated flag to track if vmcs02 has been initialized, i.e.
the constant state for vmcs02 has been written to the backing VMCS.
The launched flag (in struct loaded_vmcs) gets cleared on logical
CPU migration to mirror hardware behavior[1], i.e. using the launched
flag to determine whether or not vmcs02 constant state needs to be
initialized results in unnecessarily re-initializing the VMCS when
migrating between logical CPUs.

[1] The active VMCS needs to be VMCLEARed before it can be migrated
    to a different logical CPU.  Hardware's VMCS cache is per-CPU
    and is not coherent between CPUs.  VMCLEAR flushes the cache so
    that any dirty data is written back to memory.  A side effect
    of VMCLEAR is that it also clears the VMCS's internal launch
    flag, which KVM must mirror because VMRESUME must be used to
    run a previously launched VMCS.
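
A rough sketch of the dedicated flag, decoupled from the launched flag;
field and function names here are assumptions, not the kernel's.

  struct vmcs02_state {
          int launched;                   /* cleared by VMCLEAR on migration */
          int constant_state_initialized; /* survives logical-CPU migration  */
  };

  static void maybe_init_vmcs02_constant_state(struct vmcs02_state *v)
  {
          if (v->constant_state_initialized)
                  return;
          /* ... writes for vmcs02 state that never changes would go here ... */
          v->constant_state_initialized = 1;
  }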

Suggested-by: Jim Mattson <jmattson@google.com>
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Reviewed-by: Jim Mattson <jmattson@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5 years ago KVM: nVMX: split pieces of prepare_vmcs02() to prepare_vmcs02_early()
Sean Christopherson [Wed, 26 Sep 2018 16:23:50 +0000 (09:23 -0700)]
KVM: nVMX: split pieces of prepare_vmcs02() to prepare_vmcs02_early()

Add prepare_vmcs02_early() and move pieces of prepare_vmcs02() to the
new function.  prepare_vmcs02_early() writes the bits of vmcs02 that
a) must be in place to pass the VMFail consistency checks (assuming
vmcs12 is valid) and b) are needed to recover from a VMExit, e.g. host
state that is loaded on VMExit.  Splitting the functionality will
enable KVM to leverage hardware to do VMFail consistency checks via
a dry run of VMEnter and recover from a potential VMExit without
having to fully initialize vmcs02.

Add prepare_vmcs02_constant_state() to handle writing vmcs02 state that
comes from vmcs01 and never changes, i.e. we don't need to rewrite any
of the vmcs02 that is effectively constant once defined.
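
The intended ordering, sketched with empty placeholder helpers; this is an
outline of the flow described above, not the kernel functions themselves.

  static void write_constant_vmcs02_state(void)  { /* once per VMCS */ }
  static void write_early_vmcs02_state(void)     { /* enough to pass VMFail checks */ }
  static void write_remaining_vmcs02_state(void) { /* everything else */ }

  static void prepare_vmcs02_sketch(int vmcs02_already_initialized)
  {
          if (!vmcs02_already_initialized)
                  write_constant_vmcs02_state();
          write_early_vmcs02_state();
          /* an optional hardware "dry run" VMEnter could be attempted here */
          write_remaining_vmcs02_state();
  }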

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Reviewed-by: Jim Mattson <jmattson@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5 years ago KVM: VMX: remove ASSERT() on vmx->pml_pg validity
Sean Christopherson [Wed, 26 Sep 2018 16:23:49 +0000 (09:23 -0700)]
KVM: VMX: remove ASSERT() on vmx->pml_pg validity

vmx->pml_pg is allocated by vmx_create_vcpu() and is only nullified
when the vCPU is destroyed by vmx_free_vcpu().  Remove the ASSERTs
on vmx->pml_pg, there is no need to carry debug code that provides
no value to the current code base.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Reviewed-by: Jim Mattson <jmattson@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5 years ago KVM: nVMX: rename label for post-enter_guest_mode consistency check
Sean Christopherson [Wed, 26 Sep 2018 16:23:48 +0000 (09:23 -0700)]
KVM: nVMX: rename label for post-enter_guest_mode consistency check

Rename 'fail' to 'vmentry_fail_vmexit_guest_mode' to make it more
obvious that it's simply a different entry point to the VMExit path,
whose purpose is to unwind the updates done prior to calling
prepare_vmcs02().

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Reviewed-by: Jim Mattson <jmattson@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5 years ago KVM: nVMX: assimilate nested_vmx_entry_failure() into nested_vmx_enter_non_root_mode()
Sean Christopherson [Wed, 26 Sep 2018 16:23:47 +0000 (09:23 -0700)]
KVM: nVMX: assimilate nested_vmx_entry_failure() into nested_vmx_enter_non_root_mode()

Handling all VMExits due to failed consistency checks on VMEnter in
nested_vmx_enter_non_root_mode() consolidates all relevant code into
a single location, and removing nested_vmx_entry_failure() eliminates
a confusing function name and label.  For a VMEntry, "fail" and its
derivatives have a very specific meaning due to the different behavior
of a VMEnter VMFail versus VMExit, i.e. it wasn't obvious that
nested_vmx_entry_failure() handled VMExit scenarios.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Reviewed-by: Jim Mattson <jmattson@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5 years ago KVM: nVMX: move check_vmentry_postreqs() call to nested_vmx_enter_non_root_mode()
Sean Christopherson [Wed, 26 Sep 2018 16:23:46 +0000 (09:23 -0700)]
KVM: nVMX: move check_vmentry_postreqs() call to nested_vmx_enter_non_root_mode()

In preparation of supporting checkpoint/restore for nested state,
commit ca0bde28f2ed ("kvm: nVMX: Split VMCS checks from nested_vmx_run()")
modified check_vmentry_postreqs() to only perform the guest EFER
consistency checks when nested_run_pending is true.  But, in the
normal nested VMEntry flow, nested_run_pending is only set after
check_vmentry_postreqs(), i.e. the consistency check is being skipped.

Alternatively, nested_run_pending could be set prior to calling
check_vmentry_postreqs() in nested_vmx_run(), but placing the
consistency checks in nested_vmx_enter_non_root_mode() allows us
to split prepare_vmcs02() and interleave the preparation with
the consistency checks without having to change the call sites
of nested_vmx_enter_non_root_mode().  In other words, the rest
of the consistency check code in nested_vmx_run() will be joining
the postreqs checks in future patches.

Fixes: ca0bde28f2ed ("kvm: nVMX: Split VMCS checks from nested_vmx_run()")
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Cc: Jim Mattson <jmattson@google.com>
Reviewed-by: Jim Mattson <jmattson@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5 years ago KVM: nVMX: rename enter_vmx_non_root_mode to nested_vmx_enter_non_root_mode
Sean Christopherson [Wed, 26 Sep 2018 16:23:45 +0000 (09:23 -0700)]
KVM: nVMX: rename enter_vmx_non_root_mode to nested_vmx_enter_non_root_mode

...to be more consistent with the nested VMX nomenclature.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Reviewed-by: Jim Mattson <jmattson@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5 years ago KVM: nVMX: try to set EFER bits correctly when initializing controls
Sean Christopherson [Wed, 26 Sep 2018 16:23:44 +0000 (09:23 -0700)]
KVM: nVMX: try to set EFER bits correctly when initializing controls

VM_ENTRY_IA32E_MODE and VM_{ENTRY,EXIT}_LOAD_IA32_EFER will be
explicitly set/cleared as needed by vmx_set_efer(), but attempt
to get the bits set correctly when initializing the control fields.
Setting the value correctly can avoid multiple VMWrites.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Reviewed-by: Jim Mattson <jmattson@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5 years ago KVM: vmx: do not unconditionally clear EFER switching
Sean Christopherson [Wed, 26 Sep 2018 16:23:43 +0000 (09:23 -0700)]
KVM: vmx: do not unconditionally clear EFER switching

Do not unconditionally call clear_atomic_switch_msr() when updating
EFER.  This adds up to four unnecessary VMWrites in the case where
guest_efer != host_efer, e.g. if the load_on_{entry,exit} bits were
already set.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Reviewed-by: Jim Mattson <jmattson@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5 years ago KVM: nVMX: reset cache/shadows when switching loaded VMCS
Sean Christopherson [Wed, 26 Sep 2018 16:23:42 +0000 (09:23 -0700)]
KVM: nVMX: reset cache/shadows when switching loaded VMCS

Reset the vm_{entry,exit}_controls_shadow variables as well as the
segment cache after loading a new VMCS in vmx_switch_vmcs().  The
shadows/cache track VMCS data, i.e. they're stale every time we
switch to a new VMCS regardless of reason.

This fixes a bug where stale control shadows would be consumed after
a nested VMExit due to a failed consistency check.

Suggested-by: Jim Mattson <jmattson@google.com>
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Reviewed-by: Jim Mattson <jmattson@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5 years ago KVM: nVMX: use vm_exit_controls_init() to write exit controls for vmcs02
Sean Christopherson [Wed, 26 Sep 2018 16:23:41 +0000 (09:23 -0700)]
KVM: nVMX: use vm_exit_controls_init() to write exit controls for vmcs02

Write VM_EXIT_CONTROLS using vm_exit_controls_init() when configuring
vmcs02, otherwise vm_exit_controls_shadow will be stale.  EFER in
particular can be corrupted if VM_EXIT_LOAD_IA32_EFER is not updated
due to an incorrect shadow optimization, which can crash L0 due to
EFER not being loaded on exit.  This does not occur with the current
code base simply because update_transition_efer() unconditionally
clears VM_EXIT_LOAD_IA32_EFER before conditionally setting it, and
because a nested guest always starts with VM_EXIT_LOAD_IA32_EFER
clear, i.e. we'll only ever unnecessarily clear the bit.  That is,
until someone optimizes update_transition_efer()...

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Reviewed-by: Jim Mattson <jmattson@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5 years ago KVM: nVMX: move vmcs12 EPTP consistency check to check_vmentry_prereqs()
Sean Christopherson [Wed, 26 Sep 2018 16:23:40 +0000 (09:23 -0700)]
KVM: nVMX: move vmcs12 EPTP consistency check to check_vmentry_prereqs()

An invalid EPTP causes a VMFail(VMXERR_ENTRY_INVALID_CONTROL_FIELD),
not a VMExit.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Reviewed-by: Jim Mattson <jmattson@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5 years ago KVM: nVMX: move host EFER consistency checks to VMFail path
Sean Christopherson [Wed, 26 Sep 2018 16:23:39 +0000 (09:23 -0700)]
KVM: nVMX: move host EFER consistency checks to VMFail path

Invalid host state related to loading EFER on VMExit causes a
VMFail(VMXERR_ENTRY_INVALID_HOST_STATE_FIELD), not a VMExit.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Reviewed-by: Jim Mattson <jmattson@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5 years ago KVM: leverage change to adjust slots->used_slots in update_memslots()
Wei Yang [Wed, 22 Aug 2018 13:57:11 +0000 (21:57 +0800)]
KVM: leverage change to adjust slots->used_slots in update_memslots()

update_memslots() is only called by __kvm_set_memory_region(), in which
"change" is calculated and indicates how to adjust slots->used_slots

  * increase by one if it is KVM_MR_CREATE
  * decrease by one if it is KVM_MR_DELETE
  * not change for others

This patch adjusts slots->used_slots in update_memslots() based on the
"change" value instead of re-calculating it from scratch.
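
A minimal sketch of adjusting the counter from the change kind instead of
recounting; the enum mirrors KVM's memslot change kinds, but this is not
the kernel function itself.

  enum mem_region_change { MR_CREATE, MR_DELETE, MR_MOVE, MR_FLAGS_ONLY };

  static void adjust_used_slots(int *used_slots, enum mem_region_change change)
  {
          if (change == MR_CREATE)
                  (*used_slots)++;
          else if (change == MR_DELETE)
                  (*used_slots)--;
          /* MR_MOVE and MR_FLAGS_ONLY leave the count unchanged */
  }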

Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5 years ago KVM: nVMX: Always reflect #NM VM-exits to L1
Jim Mattson [Thu, 13 Sep 2018 18:54:48 +0000 (11:54 -0700)]
KVM: nVMX: Always reflect #NM VM-exits to L1

When bit 3 (corresponding to CR0.TS) of the VMCS12 cr0_guest_host_mask
field is clear, the VMCS12 guest_cr0 field does not necessarily hold
the current value of the L2 CR0.TS bit, so the code that checked for
L2's CR0.TS bit being set was incorrect. Moreover, I'm not sure that
the CR0.TS check was adequate. (What if L2's CR0.EM was set, for
instance?)

Fortunately, lazy FPU has gone away, so L0 has lost all interest in
intercepting #NM exceptions. See commit bd7e5b0899a4 ("KVM: x86:
remove code for lazy FPU handling"). Therefore, there is no longer any
question of which hypervisor gets first dibs. The #NM VM-exit should
always be reflected to L1. (Note that the corresponding bit must be
set in the VMCS12 exception_bitmap field for there to be an #NM
VM-exit at all.)

Fixes: ccf9844e5d99c ("kvm, vmx: Really fix lazy FPU on nested guest")
Reported-by: Abhiroop Dabral <adabral@paloaltonetworks.com>
Signed-off-by: Jim Mattson <jmattson@google.com>
Reviewed-by: Peter Shier <pshier@google.com>
Tested-by: Abhiroop Dabral <adabral@paloaltonetworks.com>
Reviewed-by: Liran Alon <liran.alon@oracle.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5 years ago KVM: x86: hyperv: implement PV IPI send hypercalls
Vitaly Kuznetsov [Wed, 26 Sep 2018 17:02:59 +0000 (19:02 +0200)]
KVM: x86: hyperv: implement PV IPI send hypercalls

Using a hypercall for sending IPIs is faster because it allows specifying
any number of vCPUs (even > 64 with a sparse CPU set), and the whole
procedure takes only one VMEXIT.

The current Hyper-V TLFS (v5.0b) claims that the HvCallSendSyntheticClusterIpi
hypercall can't be 'fast' (passing parameters through registers), but
apparently this is not true: Windows always uses it as 'fast', so we need
to support that.

Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5 years ago KVM: x86: hyperv: optimize kvm_hv_flush_tlb() for vp_index == vcpu_idx case
Vitaly Kuznetsov [Wed, 26 Sep 2018 17:02:58 +0000 (19:02 +0200)]
KVM: x86: hyperv: optimize kvm_hv_flush_tlb() for vp_index == vcpu_idx case

The VP index almost always matches the vCPU index, and when it does it's
faster to walk the sparse set instead of all vCPUs.

Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5 years ago KVM: x86: hyperv: valid_bank_mask should be 'u64'
Vitaly Kuznetsov [Wed, 26 Sep 2018 17:02:57 +0000 (19:02 +0200)]
KVM: x86: hyperv: valid_bank_mask should be 'u64'

This probably doesn't matter much (KVM_MAX_VCPUS is much lower nowadays)
but valid_bank_mask is really u64 and not unsigned long.

Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Reviewed-by: Roman Kagan <rkagan@virtuozzo.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5 years ago KVM: x86: hyperv: keep track of mismatched VP indexes
Vitaly Kuznetsov [Wed, 26 Sep 2018 17:02:56 +0000 (19:02 +0200)]
KVM: x86: hyperv: keep track of mismatched VP indexes

In most common cases VP index of a vcpu matches its vcpu index. Userspace
is, however, free to set any mapping it wishes and we need to account for
that when we need to find a vCPU with a particular VP index. To keep search
algorithms optimal in both cases introduce 'num_mismatched_vp_indexes'
counter showing how many vCPUs with mismatching VP index we have. In case
the counter is zero we can assume vp_index == vcpu_idx.
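
A simplified sketch of how such a counter can be maintained when a vCPU's
VP index changes; the struct and function are illustrative assumptions,
not the kernel code.

  #include <stdint.h>

  struct hv_vcpu_sketch { uint32_t vp_index; };

  static void set_vp_index(struct hv_vcpu_sketch *hv, uint32_t vcpu_idx,
                           uint32_t new_vp_index, int *num_mismatched)
  {
          if (new_vp_index == hv->vp_index)
                  return;
          if (hv->vp_index == vcpu_idx)           /* was matching, now isn't */
                  (*num_mismatched)++;
          else if (new_vp_index == vcpu_idx)      /* was mismatched, now matches */
                  (*num_mismatched)--;
          hv->vp_index = new_vp_index;
  }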

Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Reviewed-by: Roman Kagan <rkagan@virtuozzo.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5 years ago KVM: x86: hyperv: consistently use 'hv_vcpu' for 'struct kvm_vcpu_hv' variables
Vitaly Kuznetsov [Wed, 26 Sep 2018 17:02:55 +0000 (19:02 +0200)]
KVM: x86: hyperv: consistently use 'hv_vcpu' for 'struct kvm_vcpu_hv' variables

Rename 'hv' to 'hv_vcpu' in kvm_hv_set_msr/kvm_hv_get_msr(); 'hv' is
'reserved' for 'struct kvm_hv' variables across the file.

Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Reviewed-by: Roman Kagan <rkagan@virtuozzo.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5 years ago KVM: x86: hyperv: optimize 'all cpus' case in kvm_hv_flush_tlb()
Vitaly Kuznetsov [Wed, 22 Aug 2018 10:18:29 +0000 (12:18 +0200)]
KVM: x86: hyperv: optimize 'all cpus' case in kvm_hv_flush_tlb()

We can use 'NULL' to represent 'all cpus' case in
kvm_make_vcpus_request_mask() and avoid building vCPU mask with
all vCPUs.

Suggested-by: Radim Krčmář <rkrcmar@redhat.com>
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Reviewed-by: Roman Kagan <rkagan@virtuozzo.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5 years ago KVM: x86: hyperv: enforce vp_index < KVM_MAX_VCPUS
Vitaly Kuznetsov [Wed, 22 Aug 2018 10:18:28 +0000 (12:18 +0200)]
KVM: x86: hyperv: enforce vp_index < KVM_MAX_VCPUS

Hyper-V TLFS (5.0b) states:

> Virtual processors are identified by using an index (VP index). The
> maximum number of virtual processors per partition supported by the
> current implementation of the hypervisor can be obtained through CPUID
> leaf 0x40000005. A virtual processor index must be less than the
> maximum number of virtual processors per partition.

Forbid userspace to set VP_INDEX above KVM_MAX_VCPUS. get_vcpu_by_vpidx()
can now be optimized to bail early when supplied vpidx is >= KVM_MAX_VCPUS.
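
A self-contained sketch of the early bail-out this enables, with a
simplified lookup over an array of per-vCPU VP indexes; the constant and
the lookup shape are assumptions, not the kernel function.

  #include <stddef.h>
  #include <stdint.h>

  #define MAX_VCPUS_SKETCH 288

  static int get_vcpu_idx_by_vpidx(const uint32_t *vp_index_of_vcpu,
                                   size_t nr_vcpus, uint32_t vpidx)
  {
          size_t i;

          if (vpidx >= MAX_VCPUS_SKETCH)
                  return -1;                      /* cannot exist: bail early */
          if (vpidx < nr_vcpus && vp_index_of_vcpu[vpidx] == vpidx)
                  return (int)vpidx;              /* common case: identity map */
          for (i = 0; i < nr_vcpus; i++)
                  if (vp_index_of_vcpu[i] == vpidx)
                          return (int)i;
          return -1;
  }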

Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Reviewed-by: Roman Kagan <rkagan@virtuozzo.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5 years ago kvm/x86: return meaningful value from KVM_SIGNAL_MSI
Paolo Bonzini [Mon, 1 Oct 2018 14:07:18 +0000 (16:07 +0200)]
kvm/x86: return meaningful value from KVM_SIGNAL_MSI

If kvm_apic_map_get_dest_lapic() finds a disabled LAPIC,
it will return with bitmap==0 and (*r == -1) will be returned to
userspace.

QEMU may then record "KVM: injection failed, MSI lost
(Operation not permitted)" in its log, which is quite puzzling.

Reported-by: Peng Hao <penghao122@sina.com.cn>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5 years ago KVM: x86: move definition PT_MAX_HUGEPAGE_LEVEL and KVM_NR_PAGE_SIZES together
Wei Yang [Fri, 28 Sep 2018 14:08:50 +0000 (22:08 +0800)]
KVM: x86: move definition PT_MAX_HUGEPAGE_LEVEL and KVM_NR_PAGE_SIZES together

Currently, there are two definitions related to huge pages, but they sit a
little far from each other and seem only loosely connected:

 * KVM_NR_PAGE_SIZES defines the number of different sizes a page could map
 * PT_MAX_HUGEPAGE_LEVEL is the maximum level of a huge page

The number of different sizes a page could map equals the maximum level
of a huge page, which is implied by the current definitions.

However, the current implementation is not kind to readers and future
developers:

 * KVM_NR_PAGE_SIZES looks like a stand-alone definition at first sight
 * in case we need to support more levels, two places need to change

This patch moves these two definitions closer together, so that readers
and developers can manipulate them more comfortably.
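
One way the two definitions can be tied together, reflecting the intent
described above; this is a sketch rather than a quotation of the exact
kernel macros.

  #define PT_PAGE_TABLE_LEVEL   1       /* 4K pages */
  #define KVM_NR_PAGE_SIZES     3       /* 4K, 2M, 1G */
  #define PT_MAX_HUGEPAGE_LEVEL (PT_PAGE_TABLE_LEVEL + KVM_NR_PAGE_SIZES - 1)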

Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5 years ago KVM/VMX: Remove unused function is_external_interrupt().
Tianyu Lan [Fri, 28 Sep 2018 12:45:16 +0000 (12:45 +0000)]
KVM/VMX: Remove unused function is_external_interrupt().

is_external_interrupt() is no longer used, so remove it.

Signed-off-by: Lan Tianyu <Tianyu.Lan@microsoft.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5 years ago KVM: x86: return 0 in case kvm_mmu_memory_cache has min number of objects
Wei Yang [Tue, 4 Sep 2018 15:57:32 +0000 (23:57 +0800)]
KVM: x86: return 0 in case kvm_mmu_memory_cache has min number of objects

The code tries to pre-allocate *min* number of objects, so it is ok to
return 0 when the kvm_mmu_memory_cache meets the requirement.
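
A simplified sketch of the early return; the cache layout is illustrative,
not KVM's struct.

  struct obj_cache_sketch { int nobjs; void *objects[40]; };

  static int topup_cache(struct obj_cache_sketch *cache, int min)
  {
          if (cache->nobjs >= min)
                  return 0;       /* already holds enough pre-allocated objects */
          /* ... otherwise allocate objects until nobjs reaches min,
           *     returning an error on allocation failure ... */
          return 0;
  }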

Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5 years ago nVMX x86: Make nested_vmx_check_pml_controls() concise
Krish Sadhukhan [Thu, 27 Sep 2018 18:33:27 +0000 (14:33 -0400)]
nVMX x86: Make nested_vmx_check_pml_controls() concise

Suggested-by: Jim Mattson <jmattson@google.com>
Signed-off-by: Krish Sadhukhan <krish.sadhukhan@oracle.com>
Reviewed-by: Mark Kanda <mark.kanda@oracle.com>
Reviewed-by: Jim Mattson <jmattson@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5 years ago KVM: x86: adjust kvm_mmu_page member to save 8 bytes
Wei Yang [Wed, 5 Sep 2018 21:58:16 +0000 (05:58 +0800)]
KVM: x86: adjust kvm_mmu_page member to save 8 bytes

On a 64-bit machine, the struct is naturally aligned to 8 bytes. Since the
kvm_mmu_page members *unsync* and *role* are less than 4 bytes, we can
rearrange the sequence to compact the struct.

As the comment shows, *role* and *gfn* are used to key the shadow page. In
order to keep the comment valid, this patch moves *unsync* up and
exchanges the positions of *role* and *gfn*.

/proc/slabinfo shows that kvm_mmu_page is 8 bytes smaller and fits one more
object per slab after applying this patch.

    # name            <active_objs> <num_objs> <objsize> <objperslab>
    kvm_mmu_page_header      0           0       168         24    (before)
    kvm_mmu_page_header      0           0       160         25    (after)

Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5 years ago KVM: nVMX: restore host state in nested_vmx_vmexit for VMFail
Sean Christopherson [Wed, 22 Aug 2018 21:57:07 +0000 (14:57 -0700)]
KVM: nVMX: restore host state in nested_vmx_vmexit for VMFail

A VMEnter that VMFails (as opposed to VMExits) does not touch host
state beyond registers that are explicitly noted in the VMFail path,
e.g. EFLAGS.  Host state does not need to be loaded because VMFail
is only signaled for consistency checks that occur before the CPU
starts to load guest state, i.e. there is no need to restore any
state as nothing has been modified.  But in the case where a VMFail
is detected by hardware and not by KVM (due to deferring consistency
checks to hardware), KVM has already loaded some amount of guest
state.  Luckily, "loaded" only means loaded to KVM's software model,
i.e. vmcs01 has not been modified.  So, unwind our software model to
the pre-VMEntry host state.

Not restoring host state in this VMFail path leads to a variety of
failures because we end up with stale data in vcpu->arch, e.g. CR0,
CR4, EFER, etc... will all be out of sync relative to vmcs01.  Any
significant delta in the stale data is all but guaranteed to crash
L1, e.g. emulation of SMEP, SMAP, UMIP, WP, etc... will be wrong.

An alternative to this "soft" reload would be to load host state from
vmcs12 as if we triggered a VMExit (as opposed to VMFail), but that is
wildly inconsistent with respect to the VMX architecture, e.g. an L1
VMM with separate VMExit and VMFail paths would explode.

Note that this approach does not mean KVM is 100% accurate with
respect to VMX hardware behavior, even at an architectural level
(the exact order of consistency checks is microarchitecture specific).
But 100% emulation accuracy isn't the goal (with this patch), rather
the goal is to be consistent in the information delivered to L1, e.g.
a VMExit should not fall-through VMENTER, and a VMFail should not jump
to HOST_RIP.

This technically reverts commit "5af4157388ad (KVM: nVMX: Fix mmu
context after VMLAUNCH/VMRESUME failure)", but retains the core
aspects of that patch, just in an open coded form due to the need to
pull state from vmcs01 instead of vmcs12.  Restoring host state
resolves a variety of issues introduced by commit "4f350c6dbcb9
(kvm: nVMX: Handle deferred early VMLAUNCH/VMRESUME failure properly)",
which remedied the incorrect behavior of treating VMFail like VMExit
but in doing so neglected to restore arch state that had been modified
prior to attempting nested VMEnter.

A sample failure that occurs due to stale vcpu.arch state is a fault
of some form while emulating an LGDT (due to emulated UMIP) from L1
after a failed VMEntry to L3, in this case when running the KVM unit
test test_tpr_threshold_values in L1.  L0 also hits a WARN in this
case due to a stale arch.cr4.UMIP.

L1:
  BUG: unable to handle kernel paging request at ffffc90000663b9e
  PGD 276512067 P4D 276512067 PUD 276513067 PMD 274efa067 PTE 8000000271de2163
  Oops: 0009 [#1] SMP
  CPU: 5 PID: 12495 Comm: qemu-system-x86 Tainted: G        W         4.18.0-rc2+ #2
  Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015
  RIP: 0010:native_load_gdt+0x0/0x10

  ...

  Call Trace:
   load_fixmap_gdt+0x22/0x30
   __vmx_load_host_state+0x10e/0x1c0 [kvm_intel]
   vmx_switch_vmcs+0x2d/0x50 [kvm_intel]
   nested_vmx_vmexit+0x222/0x9c0 [kvm_intel]
   vmx_handle_exit+0x246/0x15a0 [kvm_intel]
   kvm_arch_vcpu_ioctl_run+0x850/0x1830 [kvm]
   kvm_vcpu_ioctl+0x3a1/0x5c0 [kvm]
   do_vfs_ioctl+0x9f/0x600
   ksys_ioctl+0x66/0x70
   __x64_sys_ioctl+0x16/0x20
   do_syscall_64+0x4f/0x100
   entry_SYSCALL_64_after_hwframe+0x44/0xa9

L0:
  WARNING: CPU: 2 PID: 3529 at arch/x86/kvm/vmx.c:6618 handle_desc+0x28/0x30 [kvm_intel]
  ...
  CPU: 2 PID: 3529 Comm: qemu-system-x86 Not tainted 4.17.2-coffee+ #76
  Hardware name: Intel Corporation Kabylake Client platform/KBL S
  RIP: 0010:handle_desc+0x28/0x30 [kvm_intel]

  ...

  Call Trace:
   kvm_arch_vcpu_ioctl_run+0x863/0x1840 [kvm]
   kvm_vcpu_ioctl+0x3a1/0x5c0 [kvm]
   do_vfs_ioctl+0x9f/0x5e0
   ksys_ioctl+0x66/0x70
   __x64_sys_ioctl+0x16/0x20
   do_syscall_64+0x49/0xf0
   entry_SYSCALL_64_after_hwframe+0x44/0xa9

Fixes: 5af4157388ad (KVM: nVMX: Fix mmu context after VMLAUNCH/VMRESUME failure)
Fixes: 4f350c6dbcb9 (kvm: nVMX: Handle deferred early VMLAUNCH/VMRESUME failure properly)
Cc: Jim Mattson <jmattson@google.com>
Cc: Krish Sadhukhan <krish.sadhukhan@oracle.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Radim Krčmář <rkrcmar@redhat.com>
Cc: Wanpeng Li <wanpeng.li@hotmail.com>
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5 years ago KVM: nVMX: Clear reserved bits of #DB exit qualification
Jim Mattson [Fri, 21 Sep 2018 17:36:17 +0000 (10:36 -0700)]
KVM: nVMX: Clear reserved bits of #DB exit qualification

According to volume 3 of the SDM, bits 63:15 and 12:4 of the exit
qualification field for debug exceptions are reserved (cleared to
0). However, the SDM is incorrect about bit 16 (corresponding to
DR6.RTM). This bit should be set if a debug exception (#DB) or a
breakpoint exception (#BP) occurred inside an RTM region while
advanced debugging of RTM transactional regions was enabled. Note that
this is the opposite of DR6.RTM, which "indicates (when clear) that a
debug exception (#DB) or breakpoint exception (#BP) occurred inside an
RTM region while advanced debugging of RTM transactional regions was
enabled."

There is still an issue with stale DR6 bits potentially being
misreported for the current debug exception.  DR6 should not have been
modified before vectoring the #DB exception, and the "new DR6 bits"
should be available somewhere, but it was and they aren't.

Fixes: b96fb439774e1 ("KVM: nVMX: fixes to nested virt interrupt injection")
Signed-off-by: Jim Mattson <jmattson@google.com>
Reviewed-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5 years ago kvm: selftests: support high GPAs in dirty_log_test
Andrew Jones [Tue, 18 Sep 2018 17:54:36 +0000 (19:54 +0200)]
kvm: selftests: support high GPAs in dirty_log_test

Signed-off-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5 years ago kvm: selftests: stop lying to aarch64 tests about PA-bits
Andrew Jones [Tue, 18 Sep 2018 17:54:35 +0000 (19:54 +0200)]
kvm: selftests: stop lying to aarch64 tests about PA-bits

Let's add the 40 PA-bit versions of the VM modes, that AArch64
should have been using, so we can extend the dirty log test without
breaking things.

Signed-off-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5 years ago kvm: selftests: dirty_log_test: also test 64K pages on aarch64
Andrew Jones [Tue, 18 Sep 2018 17:54:34 +0000 (19:54 +0200)]
kvm: selftests: dirty_log_test: also test 64K pages on aarch64

Signed-off-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5 years ago kvm: selftests: port dirty_log_test to aarch64
Andrew Jones [Tue, 18 Sep 2018 17:54:32 +0000 (19:54 +0200)]
kvm: selftests: port dirty_log_test to aarch64

While we're messing with the code for the port and to support guest
page sizes that are less than the host page size, we also make some
code formatting cleanups and apply sync_global_to_guest().

Signed-off-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5 years ago kvm: selftests: introduce new VM mode for 64K pages
Andrew Jones [Tue, 18 Sep 2018 17:54:33 +0000 (19:54 +0200)]
kvm: selftests: introduce new VM mode for 64K pages

Rename VM_MODE_FLAT48PG to be more descriptive of its config and add a
new config that has the same parameters, except with 64K pages.

Signed-off-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5 years ago kvm: selftests: add vcpu support for aarch64
Andrew Jones [Tue, 18 Sep 2018 17:54:31 +0000 (19:54 +0200)]
kvm: selftests: add vcpu support for aarch64

This code adds VM and VCPU setup code for the VM_MODE_FLAT48PG mode.
The VM_MODE_FLAT48PG isn't yet fully supportable, as it defines the
guest physical address limit as 52-bits, and KVM currently only
supports guests with up to 40-bit physical addresses (see
KVM_PHYS_SHIFT). VM_MODE_FLAT48PG will work fine, though, as long as
no >= 40-bit physical addresses are used.

Signed-off-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5 years ago kvm: selftests: add virt mem support for aarch64
Andrew Jones [Tue, 18 Sep 2018 17:54:30 +0000 (19:54 +0200)]
kvm: selftests: add virt mem support for aarch64

Signed-off-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5 years ago kvm: selftests: add vm_phy_pages_alloc
Andrew Jones [Tue, 18 Sep 2018 17:54:29 +0000 (19:54 +0200)]
kvm: selftests: add vm_phy_pages_alloc

Signed-off-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5 years ago kvm: selftests: tidy up kvm_util
Andrew Jones [Tue, 18 Sep 2018 17:54:28 +0000 (19:54 +0200)]
kvm: selftests: tidy up kvm_util

Tidy up kvm-util code: code/comment formatting, remove unused code,
and move x86 specific code out. We also move vcpu_dump() out of
common code, because not all arches (AArch64) have KVM_GET_REGS.

Signed-off-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5 years ago kvm: selftests: add cscope make target
Andrew Jones [Tue, 18 Sep 2018 17:54:27 +0000 (19:54 +0200)]
kvm: selftests: add cscope make target

Signed-off-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5 years ago kvm: selftests: move arch-specific files to arch-specific locations
Andrew Jones [Tue, 18 Sep 2018 17:54:26 +0000 (19:54 +0200)]
kvm: selftests: move arch-specific files to arch-specific locations

Signed-off-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5 years ago kvm: selftests: introduce ucall
Andrew Jones [Tue, 18 Sep 2018 17:54:25 +0000 (19:54 +0200)]
kvm: selftests: introduce ucall

Rework the guest exit to userspace code to generalize the concept
into what it is, a "hypercall to userspace", and provide two
implementations of it: the PortIO version currently used, but only
usable by x86, and an MMIO version that other architectures (except
s390) can use.

Signed-off-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5 years ago kvm: selftests: vcpu_setup: set cr4.osfxsr
Andrew Jones [Tue, 18 Sep 2018 17:54:24 +0000 (19:54 +0200)]
kvm: selftests: vcpu_setup: set cr4.osfxsr

Guest code may want to call functions that have variable arguments.
To do so, we either need to compile with -mno-sse or enable SSE in
the VCPUs. As it should be pretty safe to turn on the feature, and
-mno-sse would make linking test code with standard libraries
difficult, we choose to enable the feature.

Signed-off-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5 years ago KVM: LAPIC: Tune lapic_timer_advance_ns automatically
Wanpeng Li [Tue, 9 Oct 2018 01:02:08 +0000 (09:02 +0800)]
KVM: LAPIC: Tune lapic_timer_advance_ns automatically

In cloud environments, lapic_timer_advance_ns needs to be tuned for every CPU
generation and every host kernel version (kvm-unit-tests/tscdeadline_latency.flat
is 5700 cycles for the upstream kernel and 9600 cycles for our 3.10 product kernel,
both with preemption_timer=N, on a Skylake server).

This patch adds the capability to automatically tune lapic_timer_advance_ns
step by step. The initial value is 1000ns, as commit d0659d946be0 ("KVM: x86:
add option to advance tscdeadline hrtimer expiration") recommended; it is
reduced when it is too early, and increased when it is too late. guest_tsc
and tsc_deadline are unlikely to be exactly equal, so we assume we are done
when the delta is within a small range, e.g. 100 cycles. This patch reduces
latency (kvm-unit-tests/tscdeadline_latency, busy waits, preemption_timer
enabled) from ~2600 cycles to ~1200 cycles on our Skylake server.
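
A rough sketch of the step-wise adjustment described above; the threshold
and step values are illustrative assumptions, not the patch's constants.

  #include <stdint.h>

  #define ADVANCE_DONE_CYCLES 100  /* treat |delta| below this as "tuned" */
  #define ADVANCE_ADJUST_STEP   8  /* ns to move per observation */

  static void tune_timer_advance_ns(uint64_t guest_tsc, uint64_t tsc_deadline,
                                    uint32_t *advance_ns)
  {
          if (guest_tsc < tsc_deadline) {
                  /* too early: advance a bit less next time */
                  if (tsc_deadline - guest_tsc > ADVANCE_DONE_CYCLES &&
                      *advance_ns >= ADVANCE_ADJUST_STEP)
                          *advance_ns -= ADVANCE_ADJUST_STEP;
          } else if (guest_tsc - tsc_deadline > ADVANCE_DONE_CYCLES) {
                  /* too late: advance a bit more next time */
                  *advance_ns += ADVANCE_ADJUST_STEP;
          }
  }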

Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Radim Krčmář <rkrcmar@redhat.com>
Cc: Liran Alon <liran.alon@oracle.com>
Signed-off-by: Wanpeng Li <wanpengli@tencent.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5 years ago KVM: nVMX: Do not flush TLB on L1<->L2 transitions if L1 uses VPID and EPT
Liran Alon [Mon, 8 Oct 2018 20:42:20 +0000 (23:42 +0300)]
KVM: nVMX: Do not flush TLB on L1<->L2 transitions if L1 uses VPID and EPT

If L1 uses VPID, it expects the TLB to not be flushed on L1<->L2
transitions. However, the code currently flushes the TLB nonetheless if we
didn't allocate a vpid02 for L2, because in this case
vmcs02->vpid == vmcs01->vpid == vmx->vpid.

But, if L1 uses EPT, TLB entries populated by L2 are tagged with EPTP02
while TLB entries populated by L1 are tagged with EPTP01.
Therefore, we can also avoid the TLB flush if L1 uses VPID and EPT.
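
The resulting decision can be summed up roughly as follows, with plain
flags standing in for the kernel's state; a sketch of the reasoning, not
the actual code.

  /* Flush on an L1<->L2 transition only when neither a dedicated vpid02 nor
   * EPTP tagging keeps L1's and L2's TLB entries apart. */
  static int need_flush_on_nested_switch(int l1_uses_vpid, int have_vpid02,
                                         int l1_uses_ept)
  {
          if (!l1_uses_vpid)
                  return 1;       /* L1 expects the TLB to be flushed anyway */
          if (have_vpid02)
                  return 0;       /* L1 and L2 run with different VPIDs */
          return !l1_uses_ept;    /* shared VPID is fine if EPTP02 != EPTP01 */
  }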

Reviewed-by: Mihai Carabas <mihai.carabas@oracle.com>
Reviewed-by: Darren Kenny <darren.kenny@oracle.com>
Reviewed-by: Nikita Leshenko <nikita.leshchenko@oracle.com>
Signed-off-by: Liran Alon <liran.alon@oracle.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5 years ago KVM: nVMX: Flush linear and combined mappings on VPID02 related flushes
Liran Alon [Mon, 8 Oct 2018 20:42:19 +0000 (23:42 +0300)]
KVM: nVMX: Flush linear and combined mappings on VPID02 related flushes

All VPID12s used on a given L1 vCPU are translated to a single
VPID02 (vmx->nested.vpid02 or vmx->vpid). Therefore, on L1->L2 VMEntry,
we need to invalidate linear and combined mappings tagged by
VPID02 in case L1 uses VPID and vmcs12->vpid was changed since
last L1->L2 VMEntry.

However, current code invalidates the wrong mappings as it calls
__vmx_flush_tlb() with invalidate_gpa parameter set to true which will
result in invalidating combined and guest-physical mappings tagged with
active EPTP which is EPTP01.

Similarly, INVVPID emulation has the exact same issue.

Fix both issues by just setting invalidate_gpa parameter to false which
will result in invalidating linear and combined mappings tagged with
given VPID02 as required.

Reviewed-by: Nikita Leshenko <nikita.leshchenko@oracle.com>
Reviewed-by: Mark Kanda <mark.kanda@oracle.com>
Signed-off-by: Liran Alon <liran.alon@oracle.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5 years ago KVM: nVMX: Use correct VPID02 when emulating L1 INVVPID
Liran Alon [Mon, 8 Oct 2018 20:42:18 +0000 (23:42 +0300)]
KVM: nVMX: Use correct VPID02 when emulating L1 INVVPID

In case L0 didn't allocate vmx->nested.vpid02 for L2,
vmcs02->vpid is set to vmx->vpid.
Consider this case when emulating L1 INVVPID in L0.

Reviewed-by: Nikita Leshenko <nikita.leshchenko@oracle.com>
Reviewed-by: Mark Kanda <mark.kanda@oracle.com>
Signed-off-by: Liran Alon <liran.alon@oracle.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5 years ago KVM: nVMX: Flush TLB entries tagged by dest EPTP on L1<->L2 transitions
Liran Alon [Mon, 8 Oct 2018 20:42:17 +0000 (23:42 +0300)]
KVM: nVMX: Flush TLB entries tagged by dest EPTP on L1<->L2 transitions

If L1 and L2 share a VPID (because L1 doesn't use VPID or we haven't
allocated a vpid02), we need to flush the TLB on L1<->L2 transitions.

Before this patch, this TLB flushing was done by vmx_flush_tlb().
If L0 uses EPT, this translates into INVEPT(active_eptp).
However, if L1 uses EPT, on L1->L2 VMEntry the active EPTP is EPTP01 but
TLB entries populated by L2 are tagged with EPTP02.
Therefore we should delay vmx_flush_tlb() until the active EPTP is EPTP02.

To achieve this, instead of directly calling vmx_flush_tlb() we request
it to be called by KVM_REQ_TLB_FLUSH which is evaluated after
KVM_REQ_LOAD_CR3 which sets the active_eptp to EPTP02 as required.

Similarly, on L2->L1 VMExit, active EPTP is EPTP02 but TLB entries
populated by L1 are tagged with EPTP01 and therefore we should delay
vmx_flush_tlb() until active_eptp is EPTP01.

Reviewed-by: Mihai Carabas <mihai.carabas@oracle.com>
Reviewed-by: Darren Kenny <darren.kenny@oracle.com>
Reviewed-by: Nikita Leshenko <nikita.leshchenko@oracle.com>
Signed-off-by: Liran Alon <liran.alon@oracle.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5 years ago KVM: vmx: rename KVM_GUEST_CR0_MASK to KVM_VM_CR0_ALWAYS_OFF
Sean Christopherson [Fri, 13 Jul 2018 15:42:30 +0000 (08:42 -0700)]
KVM: vmx: rename KVM_GUEST_CR0_MASK to KVM_VM_CR0_ALWAYS_OFF

The KVM_GUEST_CR0_MASK macro tracks CR0 bits that are forced to zero
by the VMX architecture, i.e. CR0.{NW,CD} must always be zero in the
hardware CR0 post-VMXON.  Rename the macro to clarify its purpose,
be consistent with KVM_VM_CR0_ALWAYS_ON and avoid confusion with the
CR0_GUEST_HOST_MASK field.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Reviewed-by: Jim Mattson <jmattson@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5 years ago Merge tag 'kvm-s390-next-4.20-2' of git://git.kernel.org/pub/scm/linux/kernel/git...
Paolo Bonzini [Sat, 13 Oct 2018 10:00:26 +0000 (12:00 +0200)]
Merge tag 'kvm-s390-next-4.20-2' of git://git./linux/kernel/git/kvms390/linux into HEAD

KVM: s390/vfio-ap: Fixes and enhancements for vfio-ap

- add tracing
- fix a locking bug
- make local functions and data static

5 years ago Merge tag 'kvm-ppc-next-4.20-1' of git://git.kernel.org/pub/scm/linux/kernel/git...
Paolo Bonzini [Wed, 10 Oct 2018 16:38:32 +0000 (18:38 +0200)]
Merge tag 'kvm-ppc-next-4.20-1' of git://git./linux/kernel/git/paulus/powerpc into HEAD

PPC KVM update for 4.20.

The major new feature here is nested HV KVM support.  This allows the
HV KVM module to load inside a radix guest on POWER9 and run radix
guests underneath it.  These nested guests can run in supervisor mode
and don't require any additional instructions to be emulated, unlike
with PR KVM, and so performance is much better than with PR KVM, and
is very close to the performance of a non-nested guest.  A nested
hypervisor (a guest with nested guests) can be migrated to another
host and will bring all its nested guests along with it.  A nested
guest can also itself run guests, and so on down to any desired depth
of nesting.

Apart from that there are a series of updates for IOMMU handling from
Alexey Kardashevskiy, a "one VM per core" mode for HV KVM for
security-paranoid applications, and a small fix for PR KVM.

5 years ago KVM: PPC: Book3S HV: Add NO_HASH flag to GET_SMMU_INFO ioctl result
Paul Mackerras [Mon, 8 Oct 2018 03:24:30 +0000 (14:24 +1100)]
KVM: PPC: Book3S HV: Add NO_HASH flag to GET_SMMU_INFO ioctl result

This adds a KVM_PPC_NO_HASH flag to the flags field of the
kvm_ppc_smmu_info struct, and arranges for it to be set when
running as a nested hypervisor, as an unambiguous indication
to userspace that HPT guests are not supported.  Reporting the
KVM_CAP_PPC_MMU_HASH_V3 capability as false could be taken as
indicating only that the new HPT features in ISA V3.0 are not
supported, leaving it ambiguous whether pre-V3.0 HPT features
are supported.

Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
5 years ago KVM: PPC: Book3S HV: Add a VM capability to enable nested virtualization
Paul Mackerras [Fri, 21 Sep 2018 10:02:01 +0000 (20:02 +1000)]
KVM: PPC: Book3S HV: Add a VM capability to enable nested virtualization

With this, userspace can enable a KVM-HV guest to run nested guests
under it.

The administrator can control whether any nested guests can be run;
setting the "nested" module parameter to false prevents any guests
becoming nested hypervisors (that is, any attempt to enable the nested
capability on a guest will fail).  Guests which are already nested
hypervisors will continue to be so.

Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
5 years ago Merge remote-tracking branch 'remotes/powerpc/topic/ppc-kvm' into kvm-ppc-next
Paul Mackerras [Tue, 9 Oct 2018 05:13:20 +0000 (16:13 +1100)]
Merge remote-tracking branch 'remotes/powerpc/topic/ppc-kvm' into kvm-ppc-next

This merges in the "ppc-kvm" topic branch of the powerpc tree to get a
series of commits that touch both general arch/powerpc code and KVM
code.  These commits will be merged both via the KVM tree and the
powerpc tree.

Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
5 years ago KVM: PPC: Book3S HV: Add nested shadow page tables to debugfs
Paul Mackerras [Mon, 8 Oct 2018 05:31:17 +0000 (16:31 +1100)]
KVM: PPC: Book3S HV: Add nested shadow page tables to debugfs

This adds a list of valid shadow PTEs for each nested guest to
the 'radix' file for the guest in debugfs.  This can be useful for
debugging.

Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
5 years ago KVM: PPC: Book3S HV: Allow HV module to load without hypervisor mode
Paul Mackerras [Mon, 8 Oct 2018 05:31:16 +0000 (16:31 +1100)]
KVM: PPC: Book3S HV: Allow HV module to load without hypervisor mode

With this, the KVM-HV module can be loaded in a guest running under
KVM-HV, and if the hypervisor supports nested virtualization, this
guest can now act as a nested hypervisor and run nested guests.

This also adds some checks to inform userspace that HPT guests are not
supported by nested hypervisors (by returning false for the
KVM_CAP_PPC_MMU_HASH_V3 capability), and to prevent userspace from
configuring a guest to use HPT mode.

Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
5 years ago KVM: PPC: Book3S HV: Handle differing endianness for H_ENTER_NESTED
Suraj Jitindar Singh [Mon, 8 Oct 2018 05:31:15 +0000 (16:31 +1100)]
KVM: PPC: Book3S HV: Handle differing endianness for H_ENTER_NESTED

The hcall H_ENTER_NESTED takes two parameters: the address in L1 guest
memory of a hv_regs struct and the address of a pt_regs struct.  The
hcall requests the L0 hypervisor to use the register values in these
structs to run a L2 guest and to return the exit state of the L2 guest
in these structs.  These are in the endianness of the L1 guest, rather
than being always big-endian as is usually the case for PAPR
hypercalls.

This is convenient because it means that the L1 guest can pass the
address of the regs field in its kvm_vcpu_arch struct.  This also
improves performance slightly by avoiding the need for two copies of
the pt_regs struct.

When reading/writing these structures, this patch handles the case
where the endianness of the L1 guest differs from that of the L0
hypervisor, by byteswapping the structures after reading and before
writing them back.

Since all the fields of the pt_regs are of the same type, i.e.,
unsigned long, we treat it as an array of unsigned longs.  The fields
of struct hv_guest_state are not all the same, so its fields are
byteswapped individually.
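
A small self-contained sketch of the pt_regs treatment described above; it
assumes a 64-bit unsigned long and a GCC/Clang byteswap builtin, and the
cast in the usage comment is an illustration, not the kernel code.

  #include <stddef.h>

  /* Byteswap a register block whose fields are all unsigned long by treating
   * it as an array, as described for pt_regs. */
  static void byteswap_ulong_block(unsigned long *regs, size_t nregs)
  {
          size_t i;

          for (i = 0; i < nregs; i++)
                  regs[i] = __builtin_bswap64(regs[i]);
  }

  /* usage: byteswap_ulong_block((unsigned long *)&l2_regs,
   *                             sizeof(l2_regs) / sizeof(unsigned long)); */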

Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
5 years ago KVM: PPC: Book3S HV: Sanitise hv_regs on nested guest entry
Suraj Jitindar Singh [Mon, 8 Oct 2018 05:31:14 +0000 (16:31 +1100)]
KVM: PPC: Book3S HV: Sanitise hv_regs on nested guest entry

restore_hv_regs() is used to copy the hv_regs L1 wants to set to run the
nested (L2) guest into the vcpu structure. We need to sanitise these
values to ensure we don't let the L1 guest hypervisor do things we don't
want it to.

We don't let data address watchpoints or completed instruction address
breakpoints be set to match in hypervisor state.

We also don't let L1 enable features in the hypervisor facility status
and control register (HFSCR) for L2 which we have disabled for L1. That
is, L2 will get the subset of features which the L0 hypervisor has
enabled for L1 and the features L1 wants to enable for L2. This could
mean we give L1 a hypervisor facility unavailable interrupt for a
facility it thinks it has enabled; however, it shouldn't have enabled a
facility it itself doesn't have for the L2 guest.

We sanitise the registers when copying in the L2 hv_regs. We don't need
to sanitise when copying back the L1 hv_regs since these shouldn't be
able to contain invalid values as they're just what was copied out.
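
A rough sketch of the two sanitisation rules described above (the mask
and field names are placeholders for the kernel's DAWRX/CIABR/HFSCR
definitions):

/* Assumed masks, for illustration only. */
#define FAKE_DAWRX_HYP          0x0000000000000080UL  /* "match in HV state" bit */
#define FAKE_CIABR_PRIV_HYPER   0x0000000000000003UL  /* "privilege = hypervisor" */

struct fake_hv_regs {
        unsigned long hfscr;
        unsigned long dawrx0;
        unsigned long ciabr;
};

static void sanitise_hv_regs(struct fake_hv_regs *hr, unsigned long l1_hfscr)
{
        /* L2 only gets facilities that both L0 granted to L1 and L1 asked
         * for: take the intersection of the two HFSCR values. */
        hr->hfscr &= l1_hfscr;

        /* Don't let watchpoints/breakpoints match in hypervisor state. */
        hr->dawrx0 &= ~FAKE_DAWRX_HYP;
        if ((hr->ciabr & FAKE_CIABR_PRIV_HYPER) == FAKE_CIABR_PRIV_HYPER)
                hr->ciabr &= ~FAKE_CIABR_PRIV_HYPER;
}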

Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
5 years agoKVM: PPC: Book3S HV: Add one-reg interface to virtual PTCR register
Paul Mackerras [Mon, 8 Oct 2018 05:31:13 +0000 (16:31 +1100)]
KVM: PPC: Book3S HV: Add one-reg interface to virtual PTCR register

This adds a one-reg register identifier which can be used to read and
set the virtual PTCR for the guest.  This register identifies the
address and size of the virtual partition table for the guest, which
contains information about the nested guests under this guest.

Migrating this value is the only extra requirement for migrating a
guest which has nested guests (assuming of course that the destination
host supports nested virtualization in the kvm-hv module).
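
A hedged userspace sketch of migrating this value with the standard
one-reg ioctls; KVM_REG_PPC_PTCR is assumed to be the identifier this
patch introduces, so the name may need adjusting:

#include <stdint.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

/* Read the virtual PTCR from a vcpu on the source host and write the same
 * value to the corresponding vcpu on the destination. */
static int migrate_ptcr(int src_vcpu_fd, int dst_vcpu_fd)
{
        uint64_t ptcr = 0;
        struct kvm_one_reg reg = {
                .id   = KVM_REG_PPC_PTCR,       /* assumed identifier */
                .addr = (uintptr_t)&ptcr,
        };

        if (ioctl(src_vcpu_fd, KVM_GET_ONE_REG, &reg) < 0)
                return -1;
        if (ioctl(dst_vcpu_fd, KVM_SET_ONE_REG, &reg) < 0)
                return -1;
        return 0;
}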

Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
5 years agoKVM: PPC: Book3S HV: Don't access HFSCR, LPIDR or LPCR when running nested
Paul Mackerras [Mon, 8 Oct 2018 05:31:12 +0000 (16:31 +1100)]
KVM: PPC: Book3S HV: Don't access HFSCR, LPIDR or LPCR when running nested

When running as a nested hypervisor, this avoids reading hypervisor
privileged registers (specifically HFSCR, LPIDR and LPCR) at startup;
instead reasonable default values are used.  This also avoids writing
LPIDR in the single-vcpu entry/exit path.

Also, this removes the check for CPU_FTR_HVMODE in kvmppc_mmu_hv_init()
since its only caller already checks this.

Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
5 years agoKVM: PPC: Book3S HV: Invalidate TLB when nested vcpu moves physical cpu
Suraj Jitindar Singh [Mon, 8 Oct 2018 05:31:11 +0000 (16:31 +1100)]
KVM: PPC: Book3S HV: Invalidate TLB when nested vcpu moves physical cpu

This is only done at level 0, since only level 0 knows which physical
CPU a vcpu is running on.  This does for nested guests what L0 already
did for its own guests: flush the TLB on a pCPU when it is about to run
a vCPU there, if another vCPU in the same VM previously ran on that
pCPU and has since started running on a different pCPU.  This handles
the situation where the other vCPU touched a mapping, moved to another
pCPU and did a tlbiel (local-only tlbie) on that new pCPU, thus leaving
behind a stale TLB entry on this pCPU.

This introduces a limit on the vcpu_token values used in the
H_ENTER_NESTED hcall -- they must now be less than NR_CPUS.

[paulus@ozlabs.org - made prev_cpu array be short[] to reduce
 memory consumption.]
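
A simplified model of the idea (not the kernel's exact code; names are
illustrative, and the short[] prev_cpu array follows the note above):

#include <stdbool.h>

#define FAKE_NR_CPUS 2048

struct fake_nested_guest {
        short prev_cpu[FAKE_NR_CPUS];        /* last pCPU per vcpu_token */
        bool  need_tlb_flush[FAKE_NR_CPUS];  /* per-pCPU "flush before running" */
};

/* Called by L0 when a nested vcpu is about to run on pcpu: if it last ran
 * somewhere else, the old pCPU may hold stale translations for this guest,
 * so mark it as needing a flush before this guest runs there again. */
static void note_vcpu_placement(struct fake_nested_guest *gp,
                                int vcpu_token, int pcpu)
{
        short prev = gp->prev_cpu[vcpu_token];

        if (prev >= 0 && prev != pcpu)
                gp->need_tlb_flush[prev] = true;
        gp->prev_cpu[vcpu_token] = pcpu;
}

/* Called on the pCPU just before entering the nested guest there. */
static void maybe_flush_before_entry(struct fake_nested_guest *gp, int pcpu)
{
        if (gp->need_tlb_flush[pcpu]) {
                /* local_flush_guest_tlb();  placeholder for the tlbiel loop */
                gp->need_tlb_flush[pcpu] = false;
        }
}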

Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
5 years agoKVM: PPC: Book3S HV: Use hypercalls for TLB invalidation when nested
Paul Mackerras [Mon, 8 Oct 2018 05:31:10 +0000 (16:31 +1100)]
KVM: PPC: Book3S HV: Use hypercalls for TLB invalidation when nested

This adds code to call the H_TLB_INVALIDATE hypercall when running as
a guest, in the cases where we need to invalidate TLBs (or other MMU
caches) as part of managing the mappings for a nested guest.  Calling
H_TLB_INVALIDATE lets the nested hypervisor inform the parent
hypervisor about changes to partition-scoped page tables or the
partition table without needing to do hypervisor-privileged tlbie
instructions.

Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
5 years agoKVM: PPC: Book3S HV: Implement H_TLB_INVALIDATE hcall
Suraj Jitindar Singh [Mon, 8 Oct 2018 05:31:09 +0000 (16:31 +1100)]
KVM: PPC: Book3S HV: Implement H_TLB_INVALIDATE hcall

When running a nested (L2) guest the guest (L1) hypervisor will use
the H_TLB_INVALIDATE hcall when it needs to change the partition
scoped page tables or the partition table which it manages.  It will
use this hcall in the situations where it would use a partition-scoped
tlbie instruction if it were running in hypervisor mode.

The H_TLB_INVALIDATE hcall can invalidate different scopes:

Invalidate TLB for a given target address:
- This invalidates a single L2 -> L1 pte
- We need to invalidate any L2 -> L0 shadow_pgtable ptes which map the L2
  address space which is being invalidated. This is because a single
  L2 -> L1 pte may have been mapped with more than one pte in the
  L2 -> L0 page tables.

Invalidate the entire TLB for a given LPID or for all LPIDs:
- Invalidate the entire shadow_pgtable for a given nested guest, or
  for all nested guests.

Invalidate the PWC (page walk cache) for a given LPID or for all LPIDs:
- We don't cache the PWC, so nothing to do.

Invalidate the entire TLB, PWC and partition table for a given/all LPIDs:
- Here we re-read the partition table entry and remove the nested state
  for any nested guest for which the first doubleword of the partition
  table entry is now zero.

The H_TLB_INVALIDATE hcall takes as parameters the tlbie instruction
word (of which only the RIC, PRS and R fields are used), the rS value
(giving the lpid, where required) and the rB value (giving the IS, AP
and EPN values).
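
A hedged sketch of dispatching on the RIC field of the passed-in tlbie
instruction word, covering the scopes listed above; the bit positions
and names below are assumptions for illustration, not the exact kernel
decode:

/* Assumed field positions within the tlbie instruction word. */
static inline int get_ric(unsigned int instr) { return (instr >> 18) & 0x3; }
static inline int get_prs(unsigned int instr) { return (instr >> 17) & 0x1; }
static inline int get_r(unsigned int instr)   { return (instr >> 16) & 0x1; }

enum { RIC_TLB = 0, RIC_PWC = 1, RIC_ALL = 2 };

static long handle_tlb_invalidate(unsigned int instr,
                                  unsigned long rs_val, unsigned long rb_val)
{
        (void)rs_val;           /* real code pulls the lpid out of rS */
        (void)rb_val;           /* ... and IS/AP/EPN out of rB */

        switch (get_ric(instr)) {
        case RIC_TLB:
                /* one L2 -> L1 pte invalidated: tear down any shadow
                 * (L2 -> L0) ptes covering that range of L2 addresses */
                break;
        case RIC_PWC:
                /* the page walk cache is not cached by L0: nothing to do */
                break;
        case RIC_ALL:
                /* flush the whole shadow_pgtable for one lpid (from rS),
                 * or for all lpids */
                break;
        default:
                return -1;      /* would be H_PARAMETER in the real handler */
        }
        return 0;               /* H_SUCCESS */
}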

[paulus@ozlabs.org - adapted to having the partition table in guest
memory, added the H_TLB_INVALIDATE implementation, removed tlbie
instruction emulation, reworded the commit message.]

Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
5 years agoKVM: PPC: Book3S HV: Introduce rmap to track nested guest mappings
Suraj Jitindar Singh [Mon, 8 Oct 2018 05:31:08 +0000 (16:31 +1100)]
KVM: PPC: Book3S HV: Introduce rmap to track nested guest mappings

When a host (L0) page which is mapped into a (L1) guest is in turn
mapped through to a nested (L2) guest we keep a reverse mapping (rmap)
so that these mappings can be retrieved later.

Whenever we create an entry in a shadow_pgtable for a nested guest we
create a corresponding rmap entry and add it to the list for the
L1 guest memslot at the index of the L1 guest page it maps. This means
at the L1 guest memslot we end up with lists of rmaps.

When we are notified of a host page being invalidated which has been
mapped through to a (L1) guest, we can then walk the rmap list for that
guest page, and find and invalidate all of the corresponding
shadow_pgtable entries.

In order to reduce memory consumption, we compress the information for
each rmap entry down to 52 bits -- 12 bits for the LPID and 40 bits
for the guest real page frame number -- which will fit in a single
unsigned long.  To avoid a scenario where a guest can trigger
unbounded memory allocations, we scan the list when adding an entry to
see if there is already an entry with the contents we need.  A matching
entry can already be present because we don't ever remove entries from
the middle of a list.

A struct nested guest rmap is a list pointer and an rmap entry;
----------------
| next pointer |
----------------
| rmap entry   |
----------------

Thus the rmap pointer for each guest frame number in the memslot can be
either NULL, a single entry, or a pointer to a list of nested rmap entries.

gfn    memslot rmap array
      -------------------------
 0    | NULL                  |    (no rmap entry)
      -------------------------
 1    | single rmap entry     |    (rmap entry with low bit set)
      -------------------------
 2    | list head pointer     |    (list of rmap entries)
      -------------------------

The final entry always has the lowest bit set and is stored in the next
pointer of the last list entry, or as a single rmap entry.
With a list of rmap entries looking like;

-----------------       -----------------       -------------------------
| list head ptr | ----> | next pointer  | ----> | single rmap entry     |
-----------------       -----------------       -------------------------
                        | rmap entry    |       | rmap entry            |
                        -----------------       -------------------------
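
A hedged sketch of the 52-bit encoding plus the low-bit tag described
above (the exact shifts and masks in the kernel may differ):

#include <stdint.h>

#define RMAP_SINGLE_ENTRY   0x0000000000000001UL  /* low bit: single entry / end of list */
#define RMAP_GPA_SHIFT      12                    /* gfn stored as a page-aligned address */
#define RMAP_LPID_SHIFT     52                    /* 12-bit lpid in the top bits */

static inline unsigned long make_rmap(unsigned int lpid, unsigned long gfn)
{
        return ((unsigned long)lpid << RMAP_LPID_SHIFT) |
               (gfn << RMAP_GPA_SHIFT);
}

static inline unsigned int rmap_lpid(unsigned long rmap)
{
        return (rmap >> RMAP_LPID_SHIFT) & 0xfff;               /* 12 bits */
}

static inline unsigned long rmap_gfn(unsigned long rmap)
{
        return (rmap >> RMAP_GPA_SHIFT) & ((1UL << 40) - 1);    /* 40 bits */
}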

Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
5 years agoKVM: PPC: Book3S HV: Handle page fault for a nested guest
Suraj Jitindar Singh [Mon, 8 Oct 2018 05:31:07 +0000 (16:31 +1100)]
KVM: PPC: Book3S HV: Handle page fault for a nested guest

Consider a normal (L1) guest running under the main hypervisor (L0),
and then a nested guest (L2) running under the L1 guest which is acting
as a nested hypervisor. L0 has page tables to map the address space for
L1 providing the translation from L1 real address -> L0 real address;

L1
|
| (L1 -> L0)
|
----> L0

There are also page tables in L1 used to map the address space for L2,
providing the translation from L2 real address -> L1 real address. Since
the hardware can only walk a single level of page table, we need to
maintain in L0 a "shadow_pgtable" for L2 which provides the translation
from L2 real address -> L0 real address, which looks like;

L2                              L2
|                               |
| (L2 -> L1)                    |
|                               |
----> L1                        | (L2 -> L0)
      |                         |
      | (L1 -> L0)              |
      |                         |
      ----> L0                  --------> L0

When a page fault occurs while running a nested (L2) guest we need to
insert a pte into this "shadow_pgtable" for the L2 -> L0 mapping. To
do this we need to:

1. Walk the pgtable in L1 memory to find the L2 -> L1 mapping, and
   provide a page fault to L1 if this mapping doesn't exist.
2. Use our L1 -> L0 pgtable to convert this L1 address to an L0 address,
   or try to insert a pte for that mapping if it doesn't exist.
3. Now we have a L2 -> L0 mapping, insert this into our shadow_pgtable

Once this mapping exists we can take rc faults when hardware is unable
to automatically set the reference and change bits in the pte. On these
we need to:

1. Check the rc bits on the L2 -> L1 pte match, and otherwise reflect
   the fault down to L1.
2. Set the rc bits in the L1 -> L0 pte which corresponds to the same
   host page.
3. Set the rc bits in the L2 -> L0 pte.

As we reuse a large number of functions in book3s_64_mmu_radix.c for
this we also needed to refactor a number of these functions to take
an lpid parameter so that the correct lpid is used for tlb invalidations.
The functionality however has remained the same.
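
A very rough sketch of the control flow in the first list above, with
placeholder helpers standing in for the real walkers and inserters:

static int  walk_l1_pgtable(unsigned long l2_ra, unsigned long *l1_ra)
{ *l1_ra = l2_ra; return 0; }
static int  translate_l1_to_l0(unsigned long l1_ra, unsigned long *l0_ra)
{ *l0_ra = l1_ra; return 0; }
static int  insert_shadow_pte(unsigned long l2_ra, unsigned long l0_ra)
{ (void)l2_ra; (void)l0_ra; return 0; }
static void reflect_fault_to_l1(unsigned long l2_ra)
{ (void)l2_ra; }

/* Handle a page fault taken while running the L2 guest. */
static int handle_nested_fault(unsigned long l2_ra)
{
        unsigned long l1_ra, l0_ra;

        /* 1. Find the L2 -> L1 mapping in the L1-managed page table;
         *    if it is absent, the fault belongs to L1. */
        if (walk_l1_pgtable(l2_ra, &l1_ra) < 0) {
                reflect_fault_to_l1(l2_ra);
                return 0;
        }

        /* 2. Turn the L1 real address into an L0 real address via the
         *    L1 -> L0 tables, inserting a pte there first if needed. */
        if (translate_l1_to_l0(l1_ra, &l0_ra) < 0)
                return -1;

        /* 3. Install the combined L2 -> L0 translation in shadow_pgtable. */
        return insert_shadow_pte(l2_ra, l0_ra);
}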

Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
5 years agoKVM: PPC: Book3S HV: Handle hypercalls correctly when nested
Paul Mackerras [Mon, 8 Oct 2018 05:31:06 +0000 (16:31 +1100)]
KVM: PPC: Book3S HV: Handle hypercalls correctly when nested

When we are running as a nested hypervisor, we use a hypercall to
enter the guest rather than code in book3s_hv_rmhandlers.S.  This means
that the hypercall handlers listed in hcall_real_table never get called.
There are some hypercalls that are handled there and not in
kvmppc_pseries_do_hcall(), which therefore won't get processed for
a nested guest.

To fix this, we add cases to kvmppc_pseries_do_hcall() to handle those
hypercalls, with the following exceptions:

- The HPT hypercalls (H_ENTER, H_REMOVE, etc.) are not handled because
  we only support radix mode for nested guests.

- H_CEDE has to be handled specially because the cede logic in
  kvmhv_run_single_vcpu assumes that it has been processed by the time
  that kvmhv_p9_guest_entry() returns.  Therefore we put a special
  case for H_CEDE in kvmhv_p9_guest_entry().

For the XICS hypercalls, if real-mode processing is enabled, then the
virtual-mode handlers assume that they are being called only to finish
up the operation.  Therefore we turn off the real-mode flag in the XICS
code when running as a nested hypervisor.

Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
5 years agoKVM: PPC: Book3S HV: Use XICS hypercalls when running as a nested hypervisor
Paul Mackerras [Mon, 8 Oct 2018 05:31:05 +0000 (16:31 +1100)]
KVM: PPC: Book3S HV: Use XICS hypercalls when running as a nested hypervisor

This adds code to call the H_IPI and H_EOI hypercalls when we are
running as a nested hypervisor (i.e. without the CPU_FTR_HVMODE cpu
feature) and we would otherwise access the XICS interrupt controller
directly or via an OPAL call.

Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
5 years agoKVM: PPC: Book3S HV: Nested guest entry via hypercall
Paul Mackerras [Mon, 8 Oct 2018 05:31:04 +0000 (16:31 +1100)]
KVM: PPC: Book3S HV: Nested guest entry via hypercall

This adds a new hypercall, H_ENTER_NESTED, which is used by a nested
hypervisor to enter one of its nested guests.  The hypercall supplies
register values in two structs.  Those values are copied by the level 0
(L0) hypervisor (the one which is running in hypervisor mode) into the
vcpu struct of the L1 guest, and then the guest is run until an
interrupt or error occurs which needs to be reported to L1 via the
hypercall return value.
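
From the L1 side the entry could look roughly like the sketch below; the
opcode value and hcall wrapper are placeholders (the kernel uses its real
hypercall plumbing), so treat the details as assumptions:

#define FAKE_H_ENTER_NESTED 0x1         /* placeholder, not the real opcode */

/* Stand-in for the hypercall: in reality this traps to the L0 hypervisor
 * and does not return until the L2 guest exits. */
static long fake_hcall(unsigned long opcode, unsigned long a1, unsigned long a2)
{
        (void)opcode; (void)a1; (void)a2;
        return 0;
}

/* L1 passes the guest-real addresses of its hv_regs and pt_regs structs;
 * the return value tells L1 which interrupt or error ended the L2 run. */
static long run_l2_once(unsigned long hv_regs_ra, unsigned long pt_regs_ra)
{
        return fake_hcall(FAKE_H_ENTER_NESTED, hv_regs_ra, pt_regs_ra);
}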

Currently this assumes that the L0 and L1 hypervisors are the same
endianness, and the structs passed as arguments are in native
endianness.  If they are of different endianness, the version number
check will fail and the hcall will be rejected.

Nested hypervisors do not support indep_threads_mode=N, so this adds
code to print a warning message if the administrator has set
indep_threads_mode=N, and to treat it as Y.

Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
5 years agoKVM: PPC: Book3S HV: Framework and hcall stubs for nested virtualization
Paul Mackerras [Mon, 8 Oct 2018 05:31:03 +0000 (16:31 +1100)]
KVM: PPC: Book3S HV: Framework and hcall stubs for nested virtualization

This starts the process of adding the code to support nested HV-style
virtualization.  It defines a new H_SET_PARTITION_TABLE hypercall which
a nested hypervisor can use to set the base address and size of a
partition table in its memory (analogous to the PTCR register).
On the host (level 0 hypervisor) side, the H_SET_PARTITION_TABLE
hypercall from the guest is handled by code that saves the virtual
PTCR value for the guest.

This also adds code for creating and destroying nested guests and for
reading the partition table entry for a nested guest from L1 memory.
Each nested guest has its own shadow LPID value, different in general
from the LPID value used by the nested hypervisor to refer to it.  The
shadow LPID value is allocated at nested guest creation time.

Nested hypervisor functionality is only available for a radix guest,
which therefore means a radix host on a POWER9 (or later) processor.

Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
5 years agoKVM: PPC: Book3S HV: Use kvmppc_unmap_pte() in kvm_unmap_radix()
Paul Mackerras [Mon, 8 Oct 2018 05:31:02 +0000 (16:31 +1100)]
KVM: PPC: Book3S HV: Use kvmppc_unmap_pte() in kvm_unmap_radix()

kvmppc_unmap_pte() does a sequence of operations that are open-coded in
kvm_unmap_radix().  This extends kvmppc_unmap_pte() a little so that it
can be used by kvm_unmap_radix(), and makes kvm_unmap_radix() call it.

Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
5 years agoKVM: PPC: Book3S HV: Refactor radix page fault handler
Suraj Jitindar Singh [Mon, 8 Oct 2018 05:31:01 +0000 (16:31 +1100)]
KVM: PPC: Book3S HV: Refactor radix page fault handler

The radix page fault handler accounts for all cases, including just
needing to insert a pte.  This breaks it up into separate functions for
the two main cases: setting rc and inserting a pte.

This allows us to make the setting of rc and inserting of a pte
generic for any pgtable, not specific to the one for this guest.

[paulus@ozlabs.org - reduced diffs from previous code]

Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
5 years agoKVM: PPC: Book3S HV: Make kvmppc_mmu_radix_xlate process/partition table agnostic
Suraj Jitindar Singh [Mon, 8 Oct 2018 05:31:00 +0000 (16:31 +1100)]
KVM: PPC: Book3S HV: Make kvmppc_mmu_radix_xlate process/partition table agnostic

kvmppc_mmu_radix_xlate() is used to translate an effective address
through the process tables. The process table and partition tables have
identical layout. Exploit this fact to make the kvmppc_mmu_radix_xlate()
function able to translate either an effective address through the
process tables or a guest real address through the partition tables.

[paulus@ozlabs.org - reduced diffs from previous code]

Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
5 years agoKVM: PPC: Book3S HV: Clear partition table entry on vm teardown
Suraj Jitindar Singh [Mon, 8 Oct 2018 05:30:59 +0000 (16:30 +1100)]
KVM: PPC: Book3S HV: Clear partition table entry on vm teardown

When destroying a VM we return the LPID to the pool, however we never
zero the partition table entry. This is instead done when we reallocate
the LPID.

Zero the partition table entry on VM teardown before returning the LPID
to the pool. This means if we were running as a nested hypervisor the
real hypervisor could use this to determine when it can free resources.

Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
5 years agoKVM: PPC: Use ccr field in pt_regs struct embedded in vcpu struct
Paul Mackerras [Mon, 8 Oct 2018 05:30:58 +0000 (16:30 +1100)]
KVM: PPC: Use ccr field in pt_regs struct embedded in vcpu struct

When the 'regs' field was added to struct kvm_vcpu_arch, the code
was changed to use several of the fields inside regs (e.g., gpr, lr,
etc.) but not the ccr field, because the ccr field in struct pt_regs
is 64 bits on 64-bit platforms, but the cr field in kvm_vcpu_arch is
only 32 bits.  This changes the code to use the regs.ccr field
instead of cr, and changes the assembly code on 64-bit platforms to
use 64-bit loads and stores instead of 32-bit ones.

Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
5 years agoKVM: PPC: Book3S HV: Add a debugfs file to dump radix mappings
Paul Mackerras [Mon, 8 Oct 2018 05:30:57 +0000 (16:30 +1100)]
KVM: PPC: Book3S HV: Add a debugfs file to dump radix mappings

This adds a file called 'radix' in the debugfs directory for the
guest, which when read gives all of the valid leaf PTEs in the
partition-scoped radix tree for a radix guest, in human-readable
format.  It is analogous to the existing 'htab' file which dumps
the HPT entries for a HPT guest.

Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
5 years agoKVM: PPC: Book3S HV: Handle hypervisor instruction faults better
Paul Mackerras [Mon, 8 Oct 2018 05:30:56 +0000 (16:30 +1100)]
KVM: PPC: Book3S HV: Handle hypervisor instruction faults better

Currently the code for handling hypervisor instruction page faults
passes 0 for the flags indicating the type of fault, which is OK in
the usual case that the page is not mapped in the partition-scoped
page tables.  However, there are other causes for hypervisor
instruction page faults, such as not being able to update a reference
(R) or change (C) bit.  The cause is indicated in bits in HSRR1,
including a bit which indicates that the fault is due to not being
able to write to a page (for example to update an R or C bit).
Not handling these other kinds of faults correctly can lead to a
loop of continual faults without forward progress in the guest.

In order to handle these faults better, this patch constructs a
"DSISR-like" value from the bits which DSISR and SRR1 (for a HISI)
have in common, and passes it to kvmppc_book3s_hv_page_fault() so
that it knows what caused the fault.
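
A hedged sketch of the construction (the masks are placeholders for the
kernel's DSISR/SRR1 bit definitions):

#define FAKE_DSISR_SRR1_COMMON  0x00000000001fffffUL  /* bits DSISR and SRR1 share (assumed) */
#define FAKE_SRR1_HISI_WRITE    0x0000000000010000UL  /* "could not write" cause (assumed) */
#define FAKE_DSISR_ISSTORE      0x0000000002000000UL  /* DSISR "access was a store" (assumed) */

/* Build a DSISR-like value for a hypervisor instruction storage interrupt
 * so kvmppc_book3s_hv_page_fault() can tell what actually caused it. */
static unsigned long hisi_to_dsisr(unsigned long hsrr1)
{
        unsigned long dsisr = hsrr1 & FAKE_DSISR_SRR1_COMMON;

        if (hsrr1 & FAKE_SRR1_HISI_WRITE)
                dsisr |= FAKE_DSISR_ISSTORE;
        return dsisr;
}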

Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
5 years agoKVM: PPC: Book3S HV: Streamlined guest entry/exit path on P9 for radix guests
Paul Mackerras [Mon, 8 Oct 2018 05:30:55 +0000 (16:30 +1100)]
KVM: PPC: Book3S HV: Streamlined guest entry/exit path on P9 for radix guests

This creates an alternative guest entry/exit path which is used for
radix guests on POWER9 systems when we have indep_threads_mode=Y.  In
these circumstances there is exactly one vcpu per vcore and there is
no coordination required between vcpus or vcores; the vcpu can enter
the guest without needing to synchronize with anything else.

The new fast path is implemented almost entirely in C in book3s_hv.c
and runs with the MMU on until the guest is entered.  On guest exit
we use the existing path until the point where we are committed to
exiting the guest (as distinct from handling an interrupt in the
low-level code and returning to the guest) and we have pulled the
guest context from the XIVE.  At that point we check a flag in the
stack frame to see whether we came in via the old path or the new
path; if we came in via the new path then we go back to C code to do
the rest of the process of saving the guest context and restoring the
host context.

The C code is split into separate functions for handling the
OS-accessible state and the hypervisor state, with the idea that the
latter can be replaced by a hypercall when we implement nested
virtualization.

Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
[mpe: Fix CONFIG_ALTIVEC=n build]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
5 years agoKVM: PPC: Book3S HV: Call kvmppc_handle_exit_hv() with vcore unlocked
Paul Mackerras [Mon, 8 Oct 2018 05:30:54 +0000 (16:30 +1100)]
KVM: PPC: Book3S HV: Call kvmppc_handle_exit_hv() with vcore unlocked

Currently kvmppc_handle_exit_hv() is called with the vcore lock held
because it is called within a for_each_runnable_thread loop.
However, we already unlock the vcore within kvmppc_handle_exit_hv()
under certain circumstances, and this is safe because (a) any vcpus
that become runnable and are added to the runnable set by
kvmppc_run_vcpu() have their vcpu->arch.trap == 0 and can't actually
run in the guest (because the vcore state is VCORE_EXITING), and
(b) for_each_runnable_thread is safe against addition or removal
of vcpus from the runnable set.

Therefore, in order to simplify things for following patches, let's
drop the vcore lock in the for_each_runnable_thread loop, so
kvmppc_handle_exit_hv() gets called without the vcore lock held.

Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
5 years agoKVM: PPC: Book3S: Rework TM save/restore code and make it C-callable
Paul Mackerras [Mon, 8 Oct 2018 05:30:53 +0000 (16:30 +1100)]
KVM: PPC: Book3S: Rework TM save/restore code and make it C-callable

This adds a parameter to __kvmppc_save_tm and __kvmppc_restore_tm
which allows the caller to indicate whether it wants the nonvolatile
register state to be preserved across the call, as required by the C
calling conventions.  This parameter being non-zero also causes the
MSR bits that enable TM, FP, VMX and VSX to be preserved.  The
condition register and DSCR are now always preserved.

With this, kvmppc_save_tm_hv and kvmppc_restore_tm_hv can be called
from C code provided the 3rd parameter is non-zero.  So that these
functions can be called from modules, they now include code to set
the TOC pointer (r2) on entry, as they can call other built-in C
functions which will assume the TOC to have been set.

Also, the fake suspend code in kvmppc_save_tm_hv is modified here to
assume that treclaim in fake-suspend state does not modify any registers,
which is the case on POWER9.  This enables the code to be simplified
quite a bit.

_kvmppc_save_tm_pr and _kvmppc_restore_tm_pr become much simpler with
this change, since they now only need to save and restore TAR and pass
1 for the 3rd argument to __kvmppc_{save,restore}_tm.

Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
5 years agoKVM: PPC: Book3S HV: Simplify real-mode interrupt handling
Paul Mackerras [Mon, 8 Oct 2018 05:30:52 +0000 (16:30 +1100)]
KVM: PPC: Book3S HV: Simplify real-mode interrupt handling

This streamlines the first part of the code that handles a hypervisor
interrupt that occurred in the guest.  With this, all of the real-mode
handling that occurs is done before the "guest_exit_cont" label; once
we get to that label we are committed to exiting to host virtual mode.
Thus the machine check and HMI real-mode handling is moved before that
label.

Also, the code to handle external interrupts is moved out of line, as
is the code that calls kvmppc_realmode_hmi_handler().

Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
5 years agoKVM: PPC: Book3S HV: Extract PMU save/restore operations as C-callable functions
Paul Mackerras [Mon, 8 Oct 2018 05:30:51 +0000 (16:30 +1100)]
KVM: PPC: Book3S HV: Extract PMU save/restore operations as C-callable functions

This pulls out the assembler code that is responsible for saving and
restoring the PMU state for the host and guest into separate functions
so they can be used from an alternate entry path.  The calling
convention is made compatible with C.

Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Reviewed-by: Madhavan Srinivasan <maddy@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
5 years agoKVM: PPC: Book3S HV: Move interrupt delivery on guest entry to C code
Paul Mackerras [Mon, 8 Oct 2018 05:30:50 +0000 (16:30 +1100)]
KVM: PPC: Book3S HV: Move interrupt delivery on guest entry to C code

This is based on a patch by Suraj Jitindar Singh.

This moves the code in book3s_hv_rmhandlers.S that generates an
external, decrementer or privileged doorbell interrupt just before
entering the guest to C code in book3s_hv_builtin.c.  This is to
make future maintenance and modification easier.  The algorithm
expressed in the C code is almost identical to the previous
algorithm.

Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
5 years agoKVM: PPC: Book3S HV: Remove left-over code in XICS-on-XIVE emulation
Paul Mackerras [Mon, 8 Oct 2018 05:30:49 +0000 (16:30 +1100)]
KVM: PPC: Book3S HV: Remove left-over code in XICS-on-XIVE emulation

This removes code that clears the external interrupt pending bit in
the pending_exceptions bitmap.  This is left over from an earlier
iteration of the code where this bit was set when an escalation
interrupt arrived in order to wake the vcpu from cede.  Currently
we set the vcpu->arch.irq_pending flag instead for this purpose.
Therefore there is no need to do anything with the pending_exceptions
bitmap.

Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
5 years agoKVM: PPC: Book3S: Simplify external interrupt handling
Paul Mackerras [Mon, 8 Oct 2018 05:30:48 +0000 (16:30 +1100)]
KVM: PPC: Book3S: Simplify external interrupt handling

Currently we use two bits in the vcpu pending_exceptions bitmap to
indicate that an external interrupt is pending for the guest, one
for "one-shot" interrupts that are cleared when delivered, and one
for interrupts that persist until cleared by an explicit action of
the OS (e.g. an acknowledge to an interrupt controller).  The
BOOK3S_IRQPRIO_EXTERNAL bit is used for one-shot interrupt requests
and BOOK3S_IRQPRIO_EXTERNAL_LEVEL is used for persisting interrupts.

In practice BOOK3S_IRQPRIO_EXTERNAL never gets used, because our
Book3S platforms generally, and pseries in particular, expect
external interrupt requests to persist until they are acknowledged
at the interrupt controller.  That combined with the confusion
introduced by having two bits for what is essentially the same thing
makes it attractive to simplify things by only using one bit.  This
patch does that.

With this patch there is only BOOK3S_IRQPRIO_EXTERNAL, and by default
it has the semantics of a persisting interrupt.  In order to avoid
breaking the ABI, we introduce a new "external_oneshot" flag which
preserves the behaviour of the KVM_INTERRUPT ioctl with the
KVM_INTERRUPT_SET argument.
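
One way to picture the new behaviour (a minimal sketch, not the kernel's
code; names are illustrative):

#include <stdbool.h>

struct fake_vcpu {
        bool external_pending;  /* BOOK3S_IRQPRIO_EXTERNAL requested */
        bool external_oneshot;  /* set for KVM_INTERRUPT_SET-style requests */
};

static void request_external(struct fake_vcpu *v, bool oneshot)
{
        v->external_pending = true;
        v->external_oneshot = oneshot;
}

static void deliver_external(struct fake_vcpu *v)
{
        /* ... inject the external interrupt into the guest ... */
        if (v->external_oneshot) {
                /* one-shot semantics: cleared as soon as it is delivered */
                v->external_pending = false;
                v->external_oneshot = false;
        }
        /* otherwise it persists until the OS acknowledges it at the
         * interrupt controller */
}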

Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
5 years agopowerpc: Turn off CPU_FTR_P9_TM_HV_ASSIST in non-hypervisor mode
Paul Mackerras [Mon, 8 Oct 2018 05:30:47 +0000 (16:30 +1100)]
powerpc: Turn off CPU_FTR_P9_TM_HV_ASSIST in non-hypervisor mode

When doing nested virtualization, it is only necessary to do the
transactional memory hypervisor assist at level 0, that is, when
we are in hypervisor mode.  Nested hypervisors can just use the TM
facilities as architected.  Therefore we should clear the
CPU_FTR_P9_TM_HV_ASSIST bit when we are not in hypervisor mode,
along with the CPU_FTR_HVMODE bit.

Doing this will not change anything at this stage because the only
code that tests CPU_FTR_P9_TM_HV_ASSIST is in HV KVM, which currently
can only be used when CPU_FTR_HVMODE is set.

Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>