
Commit 256e341

Merge tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm
Pull x86 kvm updates from Paolo Bonzini:
 "Generic:

   - Rework almost all of KVM's exports to expose symbols only to KVM's
     x86 vendor modules (kvm-{amd,intel}.ko) and PPC's kvm-{pr,hv}.ko

  x86:

   - Rework almost all of KVM x86's exports to expose symbols only to
     KVM's vendor modules, i.e. to kvm-{amd,intel}.ko

   - Add support for virtualizing Control-flow Enforcement Technology
     (CET) on Intel (Shadow Stacks and Indirect Branch Tracking) and AMD
     (Shadow Stacks).

     It is worth noting that while SHSTK and IBT can be enabled
     separately in CPUID, it is not really possible to virtualize them
     separately. Therefore, Intel processors will really allow both
     SHSTK and IBT under the hood if either is made visible in the
     guest's CPUID. The alternative would be to intercept
     XSAVES/XRSTORS, which is not feasible for performance reasons

   - Fix a variety of fuzzing WARNs all caused by checking L1 intercepts
     when completing userspace I/O. KVM has already committed to
     allowing L2 to perform I/O at that point

   - Emulate PERF_CNTR_GLOBAL_STATUS_SET for PerfMonV2 guests, as the
     MSR is supposed to exist for v2 PMUs

   - Allow Centaur CPU leaves (base 0xC000_0000) for Zhaoxin CPUs

   - Add support for the immediate forms of RDMSR and WRMSRNS, sans full
     emulator support (KVM should never need to emulate the MSRs outside
     of forced emulation and other contrived testing scenarios)

   - Clean up the MSR APIs in preparation for CET and FRED
     virtualization, as well as mediated vPMU support

   - Clean up a pile of PMU code in anticipation of adding support for
     mediated vPMUs

   - Reject in-kernel IOAPIC/PIT for TDX VMs, as KVM can't obtain EOI
     vmexits needed to faithfully emulate an I/O APIC for such guests

   - Many cleanups and minor fixes

   - Recover possible NX huge pages within the TDP MMU under read lock
     to reduce guest jitter when restoring NX huge pages

   - Return -EAGAIN during prefault if userspace concurrently
     deletes/moves the relevant memslot, to fix an issue where
     prefaulting could deadlock with the memslot update

  x86 (AMD):

   - Enable AVIC by default for Zen4+ if x2AVIC (and other prereqs) is
     supported

   - Require a minimum GHCB version of 2 when starting SEV-SNP guests
     via KVM_SEV_INIT2 so that invalid GHCB versions result in immediate
     errors instead of latent guest failures

   - Add support for SEV-SNP's CipherText Hiding, an opt-in feature that
     prevents unauthorized CPU accesses from reading the ciphertext of
     SNP guest private memory, e.g. to attempt an offline attack. This
     feature splits the shared SEV-ES/SEV-SNP ASID space into separate
     ranges for SEV-ES and SEV-SNP guests, therefore a new module
     parameter is needed to control the number of ASIDs that can be used
     for VMs with CipherText Hiding vs. how many can be used to run
     SEV-ES guests

   - Add support for Secure TSC for SEV-SNP guests, which prevents the
     untrusted host from tampering with the guest's TSC frequency, while
     still allowing the VMM to configure the guest's TSC frequency prior
     to launch

   - Validate the XCR0 provided by the guest (via the GHCB) to avoid
     bugs resulting from bogus XCR0 values

   - Save an SEV guest's policy if and only if LAUNCH_START fully
     succeeds to avoid leaving behind stale state (thankfully not
     consumed in KVM)

   - Explicitly reject non-positive effective lengths during SNP's
     LAUNCH_UPDATE instead of subtly relying on guest_memfd to deal with
     them

   - Reload the pre-VMRUN TSC_AUX on #VMEXIT for SEV-ES guests, not the
     host's desired TSC_AUX, to fix a bug where KVM was keeping a
     different vCPU's TSC_AUX in the host MSR until return to userspace

  x86 (Intel):

   - Preparation for FRED support

   - Don't retry in TDX's anti-zero-step mitigation if the target
     memslot is invalid, i.e. is being deleted or moved, to fix a
     deadlock scenario similar to the aforementioned prefaulting case

   - Misc bugfixes and minor cleanups"

* tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm: (142 commits)
  KVM: x86: Export KVM-internal symbols for sub-modules only
  KVM: x86: Drop pointless exports of kvm_arch_xxx() hooks
  KVM: x86: Move kvm_intr_is_single_vcpu() to lapic.c
  KVM: Export KVM-internal symbols for sub-modules only
  KVM: s390/vfio-ap: Use kvm_is_gpa_in_memslot() instead of open coded equivalent
  KVM: VMX: Make CR4.CET a guest owned bit
  KVM: selftests: Verify MSRs are (not) in save/restore list when (un)supported
  KVM: selftests: Add coverage for KVM-defined registers in MSRs test
  KVM: selftests: Add KVM_{G,S}ET_ONE_REG coverage to MSRs test
  KVM: selftests: Extend MSRs test to validate vCPUs without supported features
  KVM: selftests: Add support for MSR_IA32_{S,U}_CET to MSRs test
  KVM: selftests: Add an MSR test to exercise guest/host and read/write
  KVM: x86: Define AMD's #HV, #VC, and #SX exception vectors
  KVM: x86: Define Control Protection Exception (#CP) vector
  KVM: x86: Add human friendly formatting for #XM, and #VE
  KVM: SVM: Enable shadow stack virtualization for SVM
  KVM: SEV: Synchronize MSR_IA32_XSS from the GHCB when it's valid
  KVM: SVM: Pass through shadow stack MSRs as appropriate
  KVM: SVM: Update dump_vmcb with shadow stack save area additions
  KVM: nSVM: Save/load CET Shadow Stack state to/from vmcb12/vmcb02
  ...
2 parents fb5bc34 + 6b36119 commit 256e341


71 files changed: +3193 / -1224 lines

Documentation/admin-guide/kernel-parameters.txt

Lines changed: 21 additions & 0 deletions

@@ -2962,6 +2962,27 @@
 			(enabled). Disable by KVM if hardware lacks support
 			for NPT.
 
+	kvm-amd.ciphertext_hiding_asids=
+			[KVM,AMD] Ciphertext hiding prevents disallowed accesses
+			to SNP private memory from reading ciphertext. Instead,
+			reads will see constant default values (0xff).
+
+			If ciphertext hiding is enabled, the joint SEV-ES and
+			SEV-SNP ASID space is partitioned into separate SEV-ES
+			and SEV-SNP ASID ranges, with the SEV-SNP range being
+			[1..max_snp_asid] and the SEV-ES range being
+			(max_snp_asid..min_sev_asid), where min_sev_asid is
+			enumerated by CPUID.0x8000_001F[EDX].
+
+			A non-zero value enables SEV-SNP ciphertext hiding and
+			adjusts the ASID ranges for SEV-ES and SEV-SNP guests.
+			KVM caps the number of SEV-SNP ASIDs at the maximum
+			possible value, e.g. specifying -1u will assign all
+			joint SEV-ES and SEV-SNP ASIDs to SEV-SNP. Note,
+			assigning all joint ASIDs to SEV-SNP, i.e. configuring
+			max_snp_asid == min_sev_asid-1, will effectively make
+			SEV-ES unusable.
+
 	kvm-arm.mode=
 			[KVM,ARM,EARLY] Select one of KVM/arm64's modes of
 			operation.

Documentation/virt/kvm/api.rst

Lines changed: 19 additions & 1 deletion

@@ -2908,6 +2908,16 @@ such as set vcpu counter or reset vcpu, and they have the following id bit patte
 
   0x9030 0000 0002 <reg:16>
 
+x86 MSR registers have the following id bit patterns::
+
+  0x2030 0002 <msr number:32>
+
+Following are the KVM-defined registers for x86:
+
+======================= ========= =============================================
+    Encoding            Register  Description
+======================= ========= =============================================
+  0x2030 0003 0000 0000 SSP       Shadow Stack Pointer
+======================= ========= =============================================
 
 4.69 KVM_GET_ONE_REG
 --------------------

@@ -3075,6 +3085,12 @@ This IOCTL replaces the obsolete KVM_GET_PIT.
 Sets the state of the in-kernel PIT model. Only valid after KVM_CREATE_PIT2.
 See KVM_GET_PIT2 for details on struct kvm_pit_state2.
 
+.. Tip::
+   ``KVM_SET_PIT2`` strictly adheres to the spec of Intel 8254 PIT. For example,
+   a ``count`` value of 0 in ``struct kvm_pit_channel_state`` is interpreted as
+   65536, which is the maximum count value. Refer to `Intel 8254 programmable
+   interval timer <https://www.scs.stanford.edu/10wi-cs140/pintos/specs/8254.pdf>`_.
+
 This IOCTL replaces the obsolete KVM_SET_PIT.

@@ -3582,7 +3598,7 @@ VCPU matching underlying host.
 ---------------------
 
 :Capability: basic
-:Architectures: arm64, mips, riscv
+:Architectures: arm64, mips, riscv, x86 (if KVM_CAP_ONE_REG)
 :Type: vcpu ioctl
 :Parameters: struct kvm_reg_list (in/out)
 :Returns: 0 on success; -1 on error

@@ -3625,6 +3641,8 @@ Note that s390 does not support KVM_GET_REG_LIST for historical reasons
 
 - KVM_REG_S390_GBEA
 
+Note, for x86, all MSRs enumerated by KVM_GET_MSR_INDEX_LIST are supported as
+type KVM_X86_REG_TYPE_MSR, but are NOT enumerated via KVM_GET_REG_LIST.
 
 4.85 KVM_ARM_SET_DEVICE_ADDR (deprecated)
 -----------------------------------------

Documentation/virt/kvm/x86/hypercalls.rst

Lines changed: 3 additions & 3 deletions

@@ -137,7 +137,7 @@ compute the CLOCK_REALTIME for its clock, at the same instant.
 Returns KVM_EOPNOTSUPP if the host does not use TSC clocksource,
 or if clock type is different than KVM_CLOCK_PAIRING_WALLCLOCK.
 
-6. KVM_HC_SEND_IPI
+7. KVM_HC_SEND_IPI
 ------------------
 
 :Architecture: x86

@@ -158,7 +158,7 @@ corresponds to the APIC ID a2+1, and so on.
 
 Returns the number of CPUs to which the IPIs were delivered successfully.
 
-7. KVM_HC_SCHED_YIELD
+8. KVM_HC_SCHED_YIELD
 ---------------------
 
 :Architecture: x86

@@ -170,7 +170,7 @@ a0: destination APIC ID
 :Usage example: When sending a call-function IPI-many to vCPUs, yield if
 any of the IPI target vCPUs was preempted.
 
-8. KVM_HC_MAP_GPA_RANGE
+9. KVM_HC_MAP_GPA_RANGE
 -------------------------
 :Architecture: x86
 :Status: active

arch/powerpc/include/asm/Kbuild

Lines changed: 0 additions & 1 deletion

@@ -3,7 +3,6 @@ generated-y += syscall_table_32.h
 generated-y += syscall_table_64.h
 generated-y += syscall_table_spu.h
 generic-y += agp.h
-generic-y += kvm_types.h
 generic-y += mcs_spinlock.h
 generic-y += qrwlock.h
 generic-y += early_ioremap.h
arch/powerpc/include/asm/kvm_types.h

Lines changed: 15 additions & 0 deletions

@@ -0,0 +1,15 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef _ASM_PPC_KVM_TYPES_H
+#define _ASM_PPC_KVM_TYPES_H
+
+#if IS_MODULE(CONFIG_KVM_BOOK3S_64_PR) && IS_MODULE(CONFIG_KVM_BOOK3S_64_HV)
+#define KVM_SUB_MODULES kvm-pr,kvm-hv
+#elif IS_MODULE(CONFIG_KVM_BOOK3S_64_PR)
+#define KVM_SUB_MODULES kvm-pr
+#elif IS_MODULE(CONFIG_KVM_BOOK3S_64_HV)
+#define KVM_SUB_MODULES kvm-hv
+#else
+#undef KVM_SUB_MODULES
+#endif
+
+#endif

arch/s390/include/asm/kvm_host.h

Lines changed: 2 additions & 0 deletions

@@ -722,6 +722,8 @@ extern int kvm_s390_enter_exit_sie(struct kvm_s390_sie_block *scb,
 extern int kvm_s390_gisc_register(struct kvm *kvm, u32 gisc);
 extern int kvm_s390_gisc_unregister(struct kvm *kvm, u32 gisc);
 
+bool kvm_s390_is_gpa_in_memslot(struct kvm *kvm, gpa_t gpa);
+
 static inline void kvm_arch_free_memslot(struct kvm *kvm,
 					 struct kvm_memory_slot *slot) {}
 static inline void kvm_arch_memslots_updated(struct kvm *kvm, u64 gen) {}

arch/s390/kvm/priv.c

Lines changed: 8 additions & 0 deletions

@@ -605,6 +605,14 @@ static int handle_io_inst(struct kvm_vcpu *vcpu)
 	}
 }
 
+#if IS_ENABLED(CONFIG_VFIO_AP)
+bool kvm_s390_is_gpa_in_memslot(struct kvm *kvm, gpa_t gpa)
+{
+	return kvm_is_gpa_in_memslot(kvm, gpa);
+}
+EXPORT_SYMBOL_FOR_MODULES(kvm_s390_is_gpa_in_memslot, "vfio_ap");
+#endif
+
 /*
  * handle_pqap: Handling pqap interception
  * @vcpu: the vcpu having issue the pqap instruction

arch/x86/include/asm/cpufeatures.h

Lines changed: 2 additions & 0 deletions

@@ -444,6 +444,7 @@
 #define X86_FEATURE_VM_PAGE_FLUSH	(19*32+ 2) /* VM Page Flush MSR is supported */
 #define X86_FEATURE_SEV_ES		(19*32+ 3) /* "sev_es" Secure Encrypted Virtualization - Encrypted State */
 #define X86_FEATURE_SEV_SNP		(19*32+ 4) /* "sev_snp" Secure Encrypted Virtualization - Secure Nested Paging */
+#define X86_FEATURE_SNP_SECURE_TSC	(19*32+ 8) /* SEV-SNP Secure TSC */
 #define X86_FEATURE_V_TSC_AUX		(19*32+ 9) /* Virtual TSC_AUX */
 #define X86_FEATURE_SME_COHERENT	(19*32+10) /* hardware-enforced cache coherency */
 #define X86_FEATURE_DEBUG_SWAP		(19*32+14) /* "debug_swap" SEV-ES full debug state swap support */

@@ -497,6 +498,7 @@
 #define X86_FEATURE_CLEAR_CPU_BUF_VM	(21*32+13) /* Clear CPU buffers using VERW before VMRUN */
 #define X86_FEATURE_IBPB_EXIT_TO_USER	(21*32+14) /* Use IBPB on exit-to-userspace, see VMSCAPE bug */
 #define X86_FEATURE_ABMC		(21*32+15) /* Assignable Bandwidth Monitoring Counters */
+#define X86_FEATURE_MSR_IMM		(21*32+16) /* MSR immediate form instructions */
 
 /*
  * BUG word(s)

arch/x86/include/asm/kvm-x86-ops.h

Lines changed: 1 addition & 1 deletion

@@ -138,7 +138,7 @@ KVM_X86_OP(check_emulate_instruction)
 KVM_X86_OP(apic_init_signal_blocked)
 KVM_X86_OP_OPTIONAL(enable_l2_tlb_flush)
 KVM_X86_OP_OPTIONAL(migrate_timers)
-KVM_X86_OP(recalc_msr_intercepts)
+KVM_X86_OP(recalc_intercepts)
 KVM_X86_OP(complete_emulated_msr)
 KVM_X86_OP(vcpu_deliver_sipi_vector)
 KVM_X86_OP_OPTIONAL_RET0(vcpu_get_apicv_inhibit_reasons);

arch/x86/include/asm/kvm_host.h

Lines changed: 52 additions & 29 deletions

@@ -120,7 +120,7 @@
 #define KVM_REQ_TLB_FLUSH_GUEST \
 	KVM_ARCH_REQ_FLAGS(27, KVM_REQUEST_WAIT | KVM_REQUEST_NO_WAKEUP)
 #define KVM_REQ_APF_READY		KVM_ARCH_REQ(28)
-#define KVM_REQ_MSR_FILTER_CHANGED	KVM_ARCH_REQ(29)
+#define KVM_REQ_RECALC_INTERCEPTS	KVM_ARCH_REQ(29)
 #define KVM_REQ_UPDATE_CPU_DIRTY_LOGGING \
 	KVM_ARCH_REQ_FLAGS(30, KVM_REQUEST_WAIT | KVM_REQUEST_NO_WAKEUP)
 #define KVM_REQ_MMU_FREE_OBSOLETE_ROOTS \

@@ -142,7 +142,7 @@
 	| X86_CR4_OSXSAVE | X86_CR4_SMEP | X86_CR4_FSGSBASE \
 	| X86_CR4_OSXMMEXCPT | X86_CR4_LA57 | X86_CR4_VMXE \
 	| X86_CR4_SMAP | X86_CR4_PKE | X86_CR4_UMIP \
-	| X86_CR4_LAM_SUP))
+	| X86_CR4_LAM_SUP | X86_CR4_CET))
 
 #define CR8_RESERVED_BITS (~(unsigned long)X86_CR8_TPR)

@@ -267,6 +267,7 @@ enum x86_intercept_stage;
 #define PFERR_RSVD_MASK		BIT(3)
 #define PFERR_FETCH_MASK	BIT(4)
 #define PFERR_PK_MASK		BIT(5)
+#define PFERR_SS_MASK		BIT(6)
 #define PFERR_SGX_MASK		BIT(15)
 #define PFERR_GUEST_RMP_MASK	BIT_ULL(31)
 #define PFERR_GUEST_FINAL_MASK	BIT_ULL(32)

@@ -545,10 +546,10 @@ struct kvm_pmc {
 #define KVM_MAX_NR_GP_COUNTERS	KVM_MAX(KVM_MAX_NR_INTEL_GP_COUNTERS, \
 				KVM_MAX_NR_AMD_GP_COUNTERS)
 
-#define KVM_MAX_NR_INTEL_FIXED_COUTNERS	3
-#define KVM_MAX_NR_AMD_FIXED_COUTNERS	0
-#define KVM_MAX_NR_FIXED_COUNTERS	KVM_MAX(KVM_MAX_NR_INTEL_FIXED_COUTNERS, \
-					KVM_MAX_NR_AMD_FIXED_COUTNERS)
+#define KVM_MAX_NR_INTEL_FIXED_COUNTERS	3
+#define KVM_MAX_NR_AMD_FIXED_COUNTERS	0
+#define KVM_MAX_NR_FIXED_COUNTERS	KVM_MAX(KVM_MAX_NR_INTEL_FIXED_COUNTERS, \
+					KVM_MAX_NR_AMD_FIXED_COUNTERS)
 
 struct kvm_pmu {
 	u8 version;

@@ -579,6 +580,9 @@ struct kvm_pmu {
 	DECLARE_BITMAP(all_valid_pmc_idx, X86_PMC_IDX_MAX);
 	DECLARE_BITMAP(pmc_in_use, X86_PMC_IDX_MAX);
 
+	DECLARE_BITMAP(pmc_counting_instructions, X86_PMC_IDX_MAX);
+	DECLARE_BITMAP(pmc_counting_branches, X86_PMC_IDX_MAX);
+
 	u64 ds_area;
 	u64 pebs_enable;
 	u64 pebs_enable_rsvd;

@@ -771,6 +775,7 @@ enum kvm_only_cpuid_leafs {
 	CPUID_7_2_EDX,
 	CPUID_24_0_EBX,
 	CPUID_8000_0021_ECX,
+	CPUID_7_1_ECX,
 	NR_KVM_CPU_CAPS,
 
 	NKVMCAPINTS = NR_KVM_CPU_CAPS - NCAPINTS,

@@ -811,7 +816,6 @@ struct kvm_vcpu_arch {
 	bool at_instruction_boundary;
 	bool tpr_access_reporting;
 	bool xfd_no_write_intercept;
-	u64 ia32_xss;
 	u64 microcode_version;
 	u64 arch_capabilities;
 	u64 perf_capabilities;

@@ -872,6 +876,8 @@ struct kvm_vcpu_arch {
 
 	u64 xcr0;
 	u64 guest_supported_xcr0;
+	u64 ia32_xss;
+	u64 guest_supported_xss;
 
 	struct kvm_pio_request pio;
 	void *pio_data;

@@ -926,6 +932,7 @@ struct kvm_vcpu_arch {
 	bool emulate_regs_need_sync_from_vcpu;
 	int (*complete_userspace_io)(struct kvm_vcpu *vcpu);
 	unsigned long cui_linear_rip;
+	int cui_rdmsr_imm_reg;
 
 	gpa_t time;
 	s8 pvclock_tsc_shift;

@@ -1348,6 +1355,30 @@ enum kvm_apicv_inhibit {
 	__APICV_INHIBIT_REASON(LOGICAL_ID_ALIASED), \
 	__APICV_INHIBIT_REASON(PHYSICAL_ID_TOO_BIG)
 
+struct kvm_possible_nx_huge_pages {
+	/*
+	 * A list of kvm_mmu_page structs that, if zapped, could possibly be
+	 * replaced by an NX huge page. A shadow page is on this list if its
+	 * existence disallows an NX huge page (nx_huge_page_disallowed is set)
+	 * and there are no other conditions that prevent a huge page, e.g.
+	 * the backing host page is huge, dirtly logging is not enabled for its
+	 * memslot, etc... Note, zapping shadow pages on this list doesn't
+	 * guarantee an NX huge page will be created in its stead, e.g. if the
+	 * guest attempts to execute from the region then KVM obviously can't
+	 * create an NX huge page (without hanging the guest).
+	 */
+	struct list_head pages;
+	u64 nr_pages;
+};
+
+enum kvm_mmu_type {
+	KVM_SHADOW_MMU,
+#ifdef CONFIG_X86_64
+	KVM_TDP_MMU,
+#endif
+	KVM_NR_MMU_TYPES,
+};
+
 struct kvm_arch {
 	unsigned long n_used_mmu_pages;
 	unsigned long n_requested_mmu_pages;

@@ -1357,21 +1388,11 @@ struct kvm_arch {
 	u8 vm_type;
 	bool has_private_mem;
 	bool has_protected_state;
+	bool has_protected_eoi;
 	bool pre_fault_allowed;
 	struct hlist_head *mmu_page_hash;
 	struct list_head active_mmu_pages;
-	/*
-	 * A list of kvm_mmu_page structs that, if zapped, could possibly be
-	 * replaced by an NX huge page. A shadow page is on this list if its
-	 * existence disallows an NX huge page (nx_huge_page_disallowed is set)
-	 * and there are no other conditions that prevent a huge page, e.g.
-	 * the backing host page is huge, dirtly logging is not enabled for its
-	 * memslot, etc... Note, zapping shadow pages on this list doesn't
-	 * guarantee an NX huge page will be created in its stead, e.g. if the
-	 * guest attempts to execute from the region then KVM obviously can't
-	 * create an NX huge page (without hanging the guest).
-	 */
-	struct list_head possible_nx_huge_pages;
+	struct kvm_possible_nx_huge_pages possible_nx_huge_pages[KVM_NR_MMU_TYPES];
 #ifdef CONFIG_KVM_EXTERNAL_WRITE_TRACKING
 	struct kvm_page_track_notifier_head track_notifier_head;
 #endif

@@ -1526,7 +1547,7 @@ struct kvm_arch {
 	 * is held in read mode:
 	 * - tdp_mmu_roots (above)
 	 * - the link field of kvm_mmu_page structs used by the TDP MMU
-	 * - possible_nx_huge_pages;
+	 * - possible_nx_huge_pages[KVM_TDP_MMU];
 	 * - the possible_nx_huge_page_link field of kvm_mmu_page structs used
 	 *   by the TDP MMU
 	 * Because the lock is only taken within the MMU lock, strictly

@@ -1908,7 +1929,7 @@ struct kvm_x86_ops {
 	int (*enable_l2_tlb_flush)(struct kvm_vcpu *vcpu);
 
 	void (*migrate_timers)(struct kvm_vcpu *vcpu);
-	void (*recalc_msr_intercepts)(struct kvm_vcpu *vcpu);
+	void (*recalc_intercepts)(struct kvm_vcpu *vcpu);
 	int (*complete_emulated_msr)(struct kvm_vcpu *vcpu, int err);
 
 	void (*vcpu_deliver_sipi_vector)(struct kvm_vcpu *vcpu, u8 vector);

@@ -2149,13 +2170,16 @@ void kvm_prepare_event_vectoring_exit(struct kvm_vcpu *vcpu, gpa_t gpa);
 
 void kvm_enable_efer_bits(u64);
 bool kvm_valid_efer(struct kvm_vcpu *vcpu, u64 efer);
-int kvm_get_msr_with_filter(struct kvm_vcpu *vcpu, u32 index, u64 *data);
-int kvm_set_msr_with_filter(struct kvm_vcpu *vcpu, u32 index, u64 data);
-int __kvm_get_msr(struct kvm_vcpu *vcpu, u32 index, u64 *data, bool host_initiated);
-int kvm_get_msr(struct kvm_vcpu *vcpu, u32 index, u64 *data);
-int kvm_set_msr(struct kvm_vcpu *vcpu, u32 index, u64 data);
+int kvm_emulate_msr_read(struct kvm_vcpu *vcpu, u32 index, u64 *data);
+int kvm_emulate_msr_write(struct kvm_vcpu *vcpu, u32 index, u64 data);
+int __kvm_emulate_msr_read(struct kvm_vcpu *vcpu, u32 index, u64 *data);
+int __kvm_emulate_msr_write(struct kvm_vcpu *vcpu, u32 index, u64 data);
+int kvm_msr_read(struct kvm_vcpu *vcpu, u32 index, u64 *data);
+int kvm_msr_write(struct kvm_vcpu *vcpu, u32 index, u64 data);
 int kvm_emulate_rdmsr(struct kvm_vcpu *vcpu);
+int kvm_emulate_rdmsr_imm(struct kvm_vcpu *vcpu, u32 msr, int reg);
 int kvm_emulate_wrmsr(struct kvm_vcpu *vcpu);
+int kvm_emulate_wrmsr_imm(struct kvm_vcpu *vcpu, u32 msr, int reg);
 int kvm_emulate_as_nop(struct kvm_vcpu *vcpu);
 int kvm_emulate_invd(struct kvm_vcpu *vcpu);
 int kvm_emulate_mwait(struct kvm_vcpu *vcpu);

@@ -2187,6 +2211,7 @@ int kvm_set_dr(struct kvm_vcpu *vcpu, int dr, unsigned long val);
 unsigned long kvm_get_dr(struct kvm_vcpu *vcpu, int dr);
 unsigned long kvm_get_cr8(struct kvm_vcpu *vcpu);
 void kvm_lmsw(struct kvm_vcpu *vcpu, unsigned long msw);
+int __kvm_set_xcr(struct kvm_vcpu *vcpu, u32 index, u64 xcr);
 int kvm_emulate_xsetbv(struct kvm_vcpu *vcpu);
 
 int kvm_get_msr_common(struct kvm_vcpu *vcpu, struct msr_data *msr);

@@ -2354,6 +2379,7 @@ int kvm_add_user_return_msr(u32 msr);
 int kvm_find_user_return_msr(u32 msr);
 int kvm_set_user_return_msr(unsigned index, u64 val, u64 mask);
 void kvm_user_return_msr_update_cache(unsigned int index, u64 val);
+u64 kvm_get_user_return_msr(unsigned int slot);
 
 static inline bool kvm_is_supported_user_return_msr(u32 msr)
 {

@@ -2390,9 +2416,6 @@ void __user *__x86_set_memory_region(struct kvm *kvm, int id, gpa_t gpa,
 bool kvm_vcpu_is_reset_bsp(struct kvm_vcpu *vcpu);
 bool kvm_vcpu_is_bsp(struct kvm_vcpu *vcpu);
 
-bool kvm_intr_is_single_vcpu(struct kvm *kvm, struct kvm_lapic_irq *irq,
-			     struct kvm_vcpu **dest_vcpu);
-
 static inline bool kvm_irq_is_postable(struct kvm_lapic_irq *irq)
 {
 	/* We can only post Fixed and LowPrio IRQs */
