From mboxrd@z Thu Jan 1 00:00:00 1970
From: Steve Rutherford
Date: Mon, 5 Apr 2021 13:42:42 -0700
Subject: Re: [PATCH v11 08/13] KVM: X86: Introduce KVM_HC_PAGE_ENC_STATUS hypercall
To: Ashish Kalra
Cc: Paolo Bonzini, Thomas Gleixner, Ingo Molnar, "H. Peter Anvin", Joerg Roedel, Borislav Petkov, Tom Lendacky, X86 ML, KVM list, LKML, Sean Christopherson, Venu Busireddy, Brijesh Singh, Will Deacon, maz@kernel.org, Quentin Perret
In-Reply-To: <4da0d40c309a21ba3952d06f346b6411930729c9.1617302792.git.ashish.kalra@amd.com>
List-ID: linux-kernel@vger.kernel.org

On Mon, Apr 5, 2021 at 7:28 AM Ashish Kalra wrote:
>
> From: Ashish Kalra
>
> This hypercall is used by the SEV guest to notify a change in the page
> encryption status to the hypervisor.
> The hypercall should be invoked
> only when the encryption attribute is changed from encrypted -> decrypted
> and vice versa. By default all guest pages are considered encrypted.
>
> The hypercall exits to userspace to manage the guest shared regions and
> integrate with the userspace VMM's migration code.
>
> The patch integrates and extends DMA_SHARE/UNSHARE hypercall to
> userspace exit functionality (arm64-specific) patch from Marc Zyngier,
> to avoid arch-specific stuff and have a common interface
> from the guest back to the VMM and sharing of the host handling of the
> hypercall to support use case for a guest to share memory with a host.
>
> Cc: Thomas Gleixner
> Cc: Ingo Molnar
> Cc: "H. Peter Anvin"
> Cc: Paolo Bonzini
> Cc: Joerg Roedel
> Cc: Borislav Petkov
> Cc: Tom Lendacky
> Cc: x86@kernel.org
> Cc: kvm@vger.kernel.org
> Cc: linux-kernel@vger.kernel.org
> Signed-off-by: Brijesh Singh
> Signed-off-by: Ashish Kalra
> ---
>  Documentation/virt/kvm/api.rst        | 18 ++++++++
>  Documentation/virt/kvm/hypercalls.rst | 15 +++++++
>  arch/x86/include/asm/kvm_host.h       |  2 +
>  arch/x86/kvm/svm/sev.c                | 61 +++++++++++++++++++++++++++
>  arch/x86/kvm/svm/svm.c                |  2 +
>  arch/x86/kvm/svm/svm.h                |  2 +
>  arch/x86/kvm/vmx/vmx.c                |  1 +
>  arch/x86/kvm/x86.c                    | 12 ++++++
>  include/uapi/linux/kvm.h              |  8 ++++
>  include/uapi/linux/kvm_para.h         |  1 +
>  10 files changed, 122 insertions(+)
>
> diff --git a/Documentation/virt/kvm/api.rst b/Documentation/virt/kvm/api.rst
> index 307f2fcf1b02..52bd7e475fd6 100644
> --- a/Documentation/virt/kvm/api.rst
> +++ b/Documentation/virt/kvm/api.rst
> @@ -5475,6 +5475,24 @@ Valid values for 'type' are:
>  Userspace is expected to place the hypercall result into the appropriate
>  field before invoking KVM_RUN again.
>
> +::
> +
> +		/* KVM_EXIT_DMA_SHARE / KVM_EXIT_DMA_UNSHARE */
> +		struct {
> +			__u64 addr;
> +			__u64 len;
> +			__u64 ret;
> +		} dma_sharing;
> +
> +This defines a common interface from the guest back to the KVM to support
> +use case for a guest to share memory with a host.
> +
> +The addr and len fields define the starting address and length of the
> +shared memory region.
> +
> +Userspace is expected to place the hypercall result into the "ret" field
> +before invoking KVM_RUN again.
> +
>  ::
>
>  	/* Fix the size of the union. */
> diff --git a/Documentation/virt/kvm/hypercalls.rst b/Documentation/virt/kvm/hypercalls.rst
> index ed4fddd364ea..7aff0cebab7c 100644
> --- a/Documentation/virt/kvm/hypercalls.rst
> +++ b/Documentation/virt/kvm/hypercalls.rst
> @@ -169,3 +169,18 @@ a0: destination APIC ID
>
>  :Usage example: When sending a call-function IPI-many to vCPUs, yield if
>   any of the IPI target vCPUs was preempted.
> +
> +
> +8. KVM_HC_PAGE_ENC_STATUS
> +-------------------------
> +:Architecture: x86
> +:Status: active
> +:Purpose: Notify the encryption status changes in guest page table (SEV guest)
> +
> +a0: the guest physical address of the start page
> +a1: the number of pages
> +a2: encryption attribute
> +
> +   Where:
> +	* 1: Encryption attribute is set
> +	* 0: Encryption attribute is cleared
> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> index 3768819693e5..78284ebbbee7 100644
> --- a/arch/x86/include/asm/kvm_host.h
> +++ b/arch/x86/include/asm/kvm_host.h
> @@ -1352,6 +1352,8 @@ struct kvm_x86_ops {
>  	int (*complete_emulated_msr)(struct kvm_vcpu *vcpu, int err);
>
>  	void (*vcpu_deliver_sipi_vector)(struct kvm_vcpu *vcpu, u8 vector);
> +	int (*page_enc_status_hc)(struct kvm_vcpu *vcpu, unsigned long gpa,
> +				  unsigned long sz, unsigned long mode);
>  };
>
>  struct kvm_x86_nested_ops {
> diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
> index c9795a22e502..fb3a315e5827 100644
> --- a/arch/x86/kvm/svm/sev.c
> +++ b/arch/x86/kvm/svm/sev.c
> @@ -1544,6 +1544,67 @@ static int sev_receive_finish(struct kvm *kvm, struct kvm_sev_cmd *argp)
>  	return ret;
>  }
>
> +static int sev_complete_userspace_page_enc_status_hc(struct kvm_vcpu *vcpu)
> +{
> +	vcpu->run->exit_reason = 0;

I don't believe you need to clear exit_reason: it's universally set on exit.

> +	kvm_rax_write(vcpu, vcpu->run->dma_sharing.ret);
> +	++vcpu->stat.hypercalls;
> +	return kvm_skip_emulated_instruction(vcpu);
> +}
> +
> +int svm_page_enc_status_hc(struct kvm_vcpu *vcpu, unsigned long gpa,
> +			   unsigned long npages, unsigned long enc)
> +{
> +	kvm_pfn_t pfn_start, pfn_end;
> +	struct kvm *kvm = vcpu->kvm;
> +	gfn_t gfn_start, gfn_end;
> +
> +	if (!sev_guest(kvm))
> +		return -EINVAL;
> +
> +	if (!npages)
> +		return 0;
> +
> +	gfn_start = gpa_to_gfn(gpa);
> +	gfn_end = gfn_start + npages;
> +
> +	/* out of bound access error check */
> +	if (gfn_end <= gfn_start)
> +		return -EINVAL;
> +
> +	/* lets make sure that gpa exist in our memslot */
> +	pfn_start = gfn_to_pfn(kvm, gfn_start);
> +	pfn_end = gfn_to_pfn(kvm, gfn_end);
> +
> +	if (is_error_noslot_pfn(pfn_start) && !is_noslot_pfn(pfn_start)) {
> +		/*
> +		 * Allow guest MMIO range(s) to be added
> +		 * to the shared pages list.
> +		 */
> +		return -EINVAL;
> +	}
> +
> +	if (is_error_noslot_pfn(pfn_end) && !is_noslot_pfn(pfn_end)) {
> +		/*
> +		 * Allow guest MMIO range(s) to be added
> +		 * to the shared pages list.
> +		 */
> +		return -EINVAL;
> +	}
> +
> +	if (enc)
> +		vcpu->run->exit_reason = KVM_EXIT_DMA_UNSHARE;
> +	else
> +		vcpu->run->exit_reason = KVM_EXIT_DMA_SHARE;
> +
> +	vcpu->run->dma_sharing.addr = gfn_start;
> +	vcpu->run->dma_sharing.len = npages * PAGE_SIZE;
> +	vcpu->arch.complete_userspace_io =
> +		sev_complete_userspace_page_enc_status_hc;
> +
> +	return 0;
> +}
> +
>  int svm_mem_enc_op(struct kvm *kvm, void __user *argp)
>  {
>  	struct kvm_sev_cmd sev_cmd;
> diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
> index 58a45bb139f8..3cbf000beff1 100644
> --- a/arch/x86/kvm/svm/svm.c
> +++ b/arch/x86/kvm/svm/svm.c
> @@ -4620,6 +4620,8 @@ static struct kvm_x86_ops svm_x86_ops __initdata = {
>  	.complete_emulated_msr = svm_complete_emulated_msr,
>
>  	.vcpu_deliver_sipi_vector = svm_vcpu_deliver_sipi_vector,
> +
> +	.page_enc_status_hc = svm_page_enc_status_hc,
>  };
>
>  static struct kvm_x86_init_ops svm_init_ops __initdata = {
> diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
> index 39e071fdab0c..9cc16d2c0b8f 100644
> --- a/arch/x86/kvm/svm/svm.h
> +++ b/arch/x86/kvm/svm/svm.h
> @@ -451,6 +451,8 @@ int nested_svm_check_exception(struct vcpu_svm *svm, unsigned nr,
>  			       bool has_error_code, u32 error_code);
>  int nested_svm_exit_special(struct vcpu_svm *svm);
>  void sync_nested_vmcb_control(struct vcpu_svm *svm);
> +int svm_page_enc_status_hc(struct kvm_vcpu *vcpu, unsigned long gpa,
> +			   unsigned long npages, unsigned long enc);
>
>  extern struct kvm_x86_nested_ops svm_nested_ops;
>
> diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
> index 32cf8287d4a7..2c98a5ed554b 100644
> --- a/arch/x86/kvm/vmx/vmx.c
> +++ b/arch/x86/kvm/vmx/vmx.c
> @@ -7748,6 +7748,7 @@ static struct kvm_x86_ops vmx_x86_ops __initdata = {
>  	.can_emulate_instruction = vmx_can_emulate_instruction,
>  	.apic_init_signal_blocked = vmx_apic_init_signal_blocked,
>  	.migrate_timers = vmx_migrate_timers,
> +	.page_enc_status_hc = NULL,
>
>  	.msr_filter_changed = vmx_msr_filter_changed,
>  	.complete_emulated_msr = kvm_complete_insn_gp,
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index f7d12fca397b..ef5c77d59651 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -8273,6 +8273,18 @@ int kvm_emulate_hypercall(struct kvm_vcpu *vcpu)
>  		kvm_sched_yield(vcpu->kvm, a0);
>  		ret = 0;
>  		break;
> +	case KVM_HC_PAGE_ENC_STATUS: {
> +		int r;
> +
> +		ret = -KVM_ENOSYS;
> +		if (kvm_x86_ops.page_enc_status_hc) {
> +			r = kvm_x86_ops.page_enc_status_hc(vcpu, a0, a1, a2);
> +			if (r >= 0)
> +				return r;
> +			ret = r;

Style nit: Why not just set ret, and return ret if ret >= 0?

This looks good. I just had a few nitpicks.

Reviewed-by: Steve Rutherford