From: Michael Roth <michael.roth@amd.com>
To: <kvm@vger.kernel.org>
Cc: <linux-coco@lists.linux.dev>, <linux-mm@kvack.org>,
	<linux-crypto@vger.kernel.org>, <x86@kernel.org>,
	<linux-kernel@vger.kernel.org>, <tglx@linutronix.de>,
	<mingo@redhat.com>, <jroedel@suse.de>, <thomas.lendacky@amd.com>,
	<hpa@zytor.com>, <ardb@kernel.org>, <pbonzini@redhat.com>,
	<seanjc@google.com>, <vkuznets@redhat.com>, <jmattson@google.com>,
	<luto@kernel.org>, <dave.hansen@linux.intel.com>,
	<slp@redhat.com>, <pgonda@google.com>, <peterz@infradead.org>,
	<srinivas.pandruvada@linux.intel.com>, <rientjes@google.com>,
	<dovmurik@linux.ibm.com>, <tobin@ibm.com>, <bp@alien8.de>,
	<vbabka@suse.cz>, <kirill@shutemov.name>, <ak@linux.intel.com>,
	<tony.luck@intel.com>,
	<sathyanarayanan.kuppuswamy@linux.intel.com>,
	<alpergun@google.com>, <jarkko@kernel.org>,
	<ashish.kalra@amd.com>, <nikunj.dadhania@amd.com>,
	<pankaj.gupta@amd.com>, <liam.merwick@oracle.com>,
	Brijesh Singh <brijesh.singh@amd.com>
Subject: [PATCH v12 11/29] KVM: SEV: Add KVM_SEV_SNP_LAUNCH_UPDATE command
Date: Fri, 29 Mar 2024 17:58:17 -0500	[thread overview]
Message-ID: <20240329225835.400662-12-michael.roth@amd.com> (raw)
In-Reply-To: <20240329225835.400662-1-michael.roth@amd.com>

From: Brijesh Singh <brijesh.singh@amd.com>

A key aspect of launching an SNP guest is initializing it with a
known/measured payload, which is encrypted into guest memory as
pre-validated private pages and measured into the cryptographic launch
context created with KVM_SEV_SNP_LAUNCH_START, so that the guest can
attest itself after booting.

Since all private pages are provided by guest_memfd, make use of the
kvm_gmem_populate() interface to handle this. The general flow is that
guest_memfd allocates the pages for the GPA ranges being initialized by
each KVM_SEV_SNP_LAUNCH_UPDATE call and copies data from userspace into
those pages, and then the post_populate callback sets the RMP entries
for those pages to private and issues the SNP firmware calls to
encrypt/measure them.

For more information see the SEV-SNP specification.
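
For illustration only (not part of this patch): a minimal sketch of the
expected userspace flow for a single call, assuming a VM fd for an SNP
guest that has already gone through KVM_SEV_SNP_LAUNCH_START, an open
/dev/sev fd, and kernel headers that expose the guest_memfd/private
memory ABI. The fd and variable names are hypothetical and error
handling is trimmed:

#include <sys/ioctl.h>
#include <linux/kvm.h>

/* Illustrative sketch only; not part of this patch. */
static int snp_load_payload(int vm_fd, int sev_fd, __u64 gfn_start,
			    void *payload, __u32 payload_len)
{
	/* Pre-condition: the target GPA range must already be private. */
	struct kvm_memory_attributes attrs = {
		.address = gfn_start << 12,	/* assumes 4K pages */
		.size = payload_len,
		.attributes = KVM_MEMORY_ATTRIBUTE_PRIVATE,
	};
	struct kvm_sev_snp_launch_update update = {
		.gfn_start = gfn_start,
		.uaddr = (__u64)(unsigned long)payload,
		.len = payload_len,
		.type = KVM_SEV_SNP_PAGE_TYPE_NORMAL,
	};
	struct kvm_sev_cmd cmd = {
		.id = KVM_SEV_SNP_LAUNCH_UPDATE,
		.data = (__u64)(unsigned long)&update,
		.sev_fd = sev_fd,
	};

	if (ioctl(vm_fd, KVM_SET_MEMORY_ATTRIBUTES, &attrs) < 0)
		return -1;

	/* Copy/encrypt the payload and extend the launch measurement. */
	return ioctl(vm_fd, KVM_MEMORY_ENCRYPT_OP, &cmd);
}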

Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
Co-developed-by: Michael Roth <michael.roth@amd.com>
Signed-off-by: Michael Roth <michael.roth@amd.com>
Signed-off-by: Ashish Kalra <ashish.kalra@amd.com>
---
 .../virt/kvm/x86/amd-memory-encryption.rst    |  39 ++++
 arch/x86/include/uapi/asm/kvm.h               |  15 ++
 arch/x86/kvm/svm/sev.c                        | 211 ++++++++++++++++++
 3 files changed, 265 insertions(+)

diff --git a/Documentation/virt/kvm/x86/amd-memory-encryption.rst b/Documentation/virt/kvm/x86/amd-memory-encryption.rst
index a10b817c162d..4268aa5c380e 100644
--- a/Documentation/virt/kvm/x86/amd-memory-encryption.rst
+++ b/Documentation/virt/kvm/x86/amd-memory-encryption.rst
@@ -478,6 +478,45 @@ Returns: 0 on success, -negative on error
 
 See the SEV-SNP spec [snp-fw-abi]_ for further detail on the launch input.
 
+19. KVM_SEV_SNP_LAUNCH_UPDATE
+-----------------------------
+
+The KVM_SEV_SNP_LAUNCH_UPDATE command is used for loading userspace-provided
+data into a guest GPA range, measuring the contents into the SNP guest context
+created by KVM_SEV_SNP_LAUNCH_START, and then encrypting/validating that GPA
+range so that it will be immediately readable using the encryption key
+associated with the guest context once it is booted, after which point it can
+attest the measurement associated with its context before unlocking any
+secrets.
+
+It is required that the GPA ranges initialized by this command have had the
+KVM_MEMORY_ATTRIBUTE_PRIVATE attribute set in advance. See the documentation
+for KVM_SET_MEMORY_ATTRIBUTES for more details on this aspect.
+
+Parameters (in): struct kvm_sev_snp_launch_update
+
+Returns: 0 on success, -negative on error
+
+::
+
+        struct kvm_sev_snp_launch_update {
+                __u64 gfn_start;        /* Guest page number to load/encrypt data into. */
+                __u64 uaddr;            /* Userspace address of data to be loaded/encrypted. */
+                __u32 len;              /* 4K-aligned length in bytes to copy into guest memory. */
+                __u8 type;              /* The type of the guest pages being initialized. */
+        };
+
+where the allowed values for type are #define'd as::
+
+	KVM_SEV_SNP_PAGE_TYPE_NORMAL
+	KVM_SEV_SNP_PAGE_TYPE_ZERO
+	KVM_SEV_SNP_PAGE_TYPE_UNMEASURED
+	KVM_SEV_SNP_PAGE_TYPE_SECRETS
+	KVM_SEV_SNP_PAGE_TYPE_CPUID
+
+See the SEV-SNP spec [snp-fw-abi]_ for further details on how each page type is
+used/measured.
+
 Device attribute API
 ====================
 
diff --git a/arch/x86/include/uapi/asm/kvm.h b/arch/x86/include/uapi/asm/kvm.h
index 350ddd5264ea..956eb548c08e 100644
--- a/arch/x86/include/uapi/asm/kvm.h
+++ b/arch/x86/include/uapi/asm/kvm.h
@@ -695,6 +695,7 @@ enum sev_cmd_id {
 
 	/* SNP-specific commands */
 	KVM_SEV_SNP_LAUNCH_START,
+	KVM_SEV_SNP_LAUNCH_UPDATE,
 
 	KVM_SEV_NR_MAX,
 };
@@ -826,6 +827,20 @@ struct kvm_sev_snp_launch_start {
 	__u8 gosvw[16];
 };
 
+/* Kept in sync with firmware values for simplicity. */
+#define KVM_SEV_SNP_PAGE_TYPE_NORMAL		0x1
+#define KVM_SEV_SNP_PAGE_TYPE_ZERO		0x3
+#define KVM_SEV_SNP_PAGE_TYPE_UNMEASURED	0x4
+#define KVM_SEV_SNP_PAGE_TYPE_SECRETS		0x5
+#define KVM_SEV_SNP_PAGE_TYPE_CPUID		0x6
+
+struct kvm_sev_snp_launch_update {
+	__u64 gfn_start;
+	__u64 uaddr;
+	__u32 len;
+	__u8 type;
+};
+
 #define KVM_X2APIC_API_USE_32BIT_IDS            (1ULL << 0)
 #define KVM_X2APIC_API_DISABLE_BROADCAST_QUIRK  (1ULL << 1)
 
diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
index 6c7c77e33e62..a8a8a285b4a4 100644
--- a/arch/x86/kvm/svm/sev.c
+++ b/arch/x86/kvm/svm/sev.c
@@ -247,6 +247,35 @@ static void sev_decommission(unsigned int handle)
 	sev_guest_decommission(&decommission, NULL);
 }
 
+static int snp_page_reclaim(u64 pfn)
+{
+	struct sev_data_snp_page_reclaim data = {0};
+	int err, rc;
+
+	data.paddr = __sme_set(pfn << PAGE_SHIFT);
+	rc = sev_do_cmd(SEV_CMD_SNP_PAGE_RECLAIM, &data, &err);
+	if (WARN_ON_ONCE(rc)) {
+		/*
+		 * This shouldn't happen under normal circumstances, but if the
+		 * reclaim failed, then the page is no longer safe to use.
+		 */
+		snp_leak_pages(pfn, 1);
+	}
+
+	return rc;
+}
+
+static int host_rmp_make_shared(u64 pfn, enum pg_level level, bool leak)
+{
+	int rc;
+
+	rc = rmp_make_shared(pfn, level);
+	if (rc && leak)
+		snp_leak_pages(pfn, page_level_size(level) >> PAGE_SHIFT);
+
+	return rc;
+}
+
 static void sev_unbind_asid(struct kvm *kvm, unsigned int handle)
 {
 	struct sev_data_deactivate deactivate;
@@ -2075,6 +2104,185 @@ static int snp_launch_start(struct kvm *kvm, struct kvm_sev_cmd *argp)
 	return rc;
 }
 
+struct sev_gmem_populate_args {
+	__u8 type;
+	int sev_fd;
+	int fw_error;
+};
+
+static int sev_gmem_post_populate(struct kvm *kvm, struct kvm_memory_slot *slot,
+				  gfn_t gfn_start, kvm_pfn_t pfn, void __user *src,
+				  int order, void *opaque)
+{
+	struct sev_gmem_populate_args *sev_populate_args = opaque;
+	struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info;
+	int npages = (1 << order);
+	int n_private = 0;
+	int ret, i;
+	gfn_t gfn;
+
+	pr_debug("%s: gfn_start %llx pfn_start %llx npages %d\n",
+		 __func__, gfn_start, pfn, npages);
+
+	for (gfn = gfn_start, i = 0; gfn < gfn_start + npages; gfn++, i++) {
+		struct sev_data_snp_launch_update fw_args = {0};
+		bool assigned;
+		int level;
+
+		if (!kvm_mem_is_private(kvm, gfn)) {
+			pr_debug("%s: Failed to ensure GFN 0x%llx has private memory attribute set\n",
+				 __func__, gfn);
+			ret = -EINVAL;
+			break;
+		}
+
+		ret = snp_lookup_rmpentry((u64)pfn + i, &assigned, &level);
+		if (ret || assigned) {
+			pr_debug("%s: Failed to ensure GFN 0x%llx RMP entry is initial shared state, ret: %d assigned: %d\n",
+				 __func__, gfn, ret, assigned);
+			break;
+		}
+
+		ret = rmp_make_private(pfn + i, gfn << PAGE_SHIFT, PG_LEVEL_4K,
+				       sev_get_asid(kvm), true);
+		if (ret) {
+			pr_debug("%s: Failed to convert GFN 0x%llx to private, ret: %d\n",
+				 __func__, gfn, ret);
+			break;
+		}
+
+		n_private++;
+
+		fw_args.gctx_paddr = __psp_pa(sev->snp_context);
+		fw_args.address = __sme_set(pfn_to_hpa(pfn + i));
+		fw_args.page_size = PG_LEVEL_TO_RMP(PG_LEVEL_4K);
+		fw_args.page_type = sev_populate_args->type;
+		ret = __sev_issue_cmd(sev_populate_args->sev_fd, SEV_CMD_SNP_LAUNCH_UPDATE,
+				      &fw_args, &sev_populate_args->fw_error);
+		if (ret) {
+			pr_debug("%s: SEV-SNP launch update failed, ret: 0x%x, fw_error: 0x%x\n",
+				 __func__, ret, sev_populate_args->fw_error);
+
+			if (snp_page_reclaim(pfn + i))
+				break;
+
+			/*
+			 * When invalid CPUID function entries are detected,
+			 * firmware writes the expected values into the page and
+			 * leaves it unencrypted so it can be used for debugging
+			 * and error-reporting.
+			 *
+			 * Copy this page back into the source buffer so
+			 * that userspace can examine it to determine which
+			 * CPUID leaves/fields failed CPUID validation and
+			 * report the failure accordingly.
+			 */
+			if (sev_populate_args->type == KVM_SEV_SNP_PAGE_TYPE_CPUID &&
+			    sev_populate_args->fw_error == SEV_RET_INVALID_PARAM) {
+				void *vaddr;
+
+				host_rmp_make_shared(pfn + i, PG_LEVEL_4K, true);
+				vaddr = kmap_local_pfn(pfn + i);
+
+				if (copy_to_user(src + i * PAGE_SIZE,
+						 vaddr, PAGE_SIZE))
+					pr_debug("Failed to write CPUID page back to userspace\n");
+
+				kunmap_local(vaddr);
+			}
+
+			break;
+		}
+	}
+
+	if (ret) {
+		pr_debug("%s: exiting with error ret %d, undoing %d populated gmem pages.\n",
+			 __func__, ret, n_private);
+		for (i = 0; i < n_private; i++)
+			host_rmp_make_shared(pfn + i, PG_LEVEL_4K, true);
+	}
+
+	return ret;
+}
+
+static int snp_launch_update(struct kvm *kvm, struct kvm_sev_cmd *argp)
+{
+	struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info;
+	struct sev_gmem_populate_args sev_populate_args = {0};
+	struct kvm_gmem_populate_args populate_args = {0};
+	struct kvm_sev_snp_launch_update params;
+	struct kvm_memory_slot *memslot;
+	unsigned int npages;
+	int ret = 0;
+
+	if (!sev_snp_guest(kvm) || !sev->snp_context)
+		return -EINVAL;
+
+	if (copy_from_user(&params, u64_to_user_ptr(argp->data), sizeof(params)))
+		return -EFAULT;
+
+	if (!IS_ALIGNED(params.len, PAGE_SIZE) ||
+	    (params.type != KVM_SEV_SNP_PAGE_TYPE_NORMAL &&
+	     params.type != KVM_SEV_SNP_PAGE_TYPE_ZERO &&
+	     params.type != KVM_SEV_SNP_PAGE_TYPE_UNMEASURED &&
+	     params.type != KVM_SEV_SNP_PAGE_TYPE_SECRETS &&
+	     params.type != KVM_SEV_SNP_PAGE_TYPE_CPUID))
+		return -EINVAL;
+
+	npages = params.len / PAGE_SIZE;
+
+	pr_debug("%s: GFN range 0x%llx-0x%llx type %d\n", __func__,
+		 params.gfn_start, params.gfn_start + npages, params.type);
+
+	/*
+	 * For each GFN that's being prepared as part of the initial guest
+	 * state, the following pre-conditions are verified:
+	 *
+	 *   1) The backing memslot is a valid private memslot.
+	 *   2) The GFN has been set to private via KVM_SET_MEMORY_ATTRIBUTES
+	 *      beforehand.
+	 *   3) The PFN of the guest_memfd has not already been set to private
+	 *      in the RMP table.
+	 *
+	 * The KVM MMU relies on kvm->mmu_invalidate_seq to retry nested page
+	 * faults if there's a race between a fault and an attribute update via
+	 * KVM_SET_MEMORY_ATTRIBUTES, and a similar approach could be utilized
+	 * here. However, kvm->slots_lock guards against both this as well as
+	 * concurrent memslot updates occurring while these checks are being
+	 * performed, so use that here to make it easier to reason about the
+	 * initial expected state and better guard against unexpected
+	 * situations.
+	 */
+	mutex_lock(&kvm->slots_lock);
+
+	memslot = gfn_to_memslot(kvm, params.gfn_start);
+	if (!kvm_slot_can_be_private(memslot)) {
+		ret = -EINVAL;
+		goto out;
+	}
+
+	sev_populate_args.sev_fd = argp->sev_fd;
+	sev_populate_args.type = params.type;
+
+	populate_args.opaque = &sev_populate_args;
+	populate_args.gfn = params.gfn_start;
+	populate_args.src = u64_to_user_ptr(params.uaddr);
+	populate_args.npages = npages;
+	populate_args.do_memcpy = params.type != KVM_SEV_SNP_PAGE_TYPE_ZERO;
+	populate_args.post_populate = sev_gmem_post_populate;
+
+	ret = kvm_gmem_populate(kvm, memslot, &populate_args);
+	if (ret) {
+		argp->error = sev_populate_args.fw_error;
+		pr_debug("%s: kvm_gmem_populate failed, ret %d\n", __func__, ret);
+	}
+
+out:
+	mutex_unlock(&kvm->slots_lock);
+
+	return ret;
+}
+
 int sev_mem_enc_ioctl(struct kvm *kvm, void __user *argp)
 {
 	struct kvm_sev_cmd sev_cmd;
@@ -2165,6 +2373,9 @@ int sev_mem_enc_ioctl(struct kvm *kvm, void __user *argp)
 	case KVM_SEV_SNP_LAUNCH_START:
 		r = snp_launch_start(kvm, &sev_cmd);
 		break;
+	case KVM_SEV_SNP_LAUNCH_UPDATE:
+		r = snp_launch_update(kvm, &sev_cmd);
+		break;
 	default:
 		r = -EINVAL;
 		goto out;
-- 
2.25.1


WARNING: multiple messages have this Message-ID (diff)
From: Michael Roth <michael.roth@amd.com>
To: <kvm@vger.kernel.org>
Cc: <linux-coco@lists.linux.dev>, <linux-mm@kvack.org>,
	<linux-crypto@vger.kernel.org>, <x86@kernel.org>,
	<linux-kernel@vger.kernel.org>, <tglx@linutronix.de>,
	<mingo@redhat.com>, <jroedel@suse.de>, <thomas.lendacky@amd.com>,
	<hpa@zytor.com>, <ardb@kernel.org>, <pbonzini@redhat.com>,
	<seanjc@google.com>, <vkuznets@redhat.com>, <jmattson@google.com>,
	<luto@kernel.org>, <dave.hansen@linux.intel.com>,
	<slp@redhat.com>, <pgonda@google.com>, <peterz@infradead.org>,
	<srinivas.pandruvada@linux.intel.com>, <rientjes@google.com>,
	<dovmurik@linux.ibm.com>, <tobin@ibm.com>, <bp@alien8.de>,
	<vbabka@suse.cz>, <kirill@shutemov.name>, <ak@linux.intel.com>,
	<tony.luck@intel.com>,
	<sathyanarayanan.kuppuswamy@linux.intel.com>,
	<alpergun@google.com>, <jarkko@kernel.org>,
	<ashish.kalra@amd.com>, <nikunj.dadhania@amd.com>,
	<pankaj.gupta@amd.com>, <liam.merwick@oracle.com>,
	Brijesh Singh <brijesh.singh@amd.com>
Subject: [PATCH v12 11/29] KVM: SEV: Add KVM_SEV_SNP_LAUNCH_UPDATE command
Date: Fri, 29 Mar 2024 17:58:17 -0500	[thread overview]
Message-ID: <20240329225835.400662-12-michael.roth@amd.com> (raw)
Message-ID: <20240329225817._xueVjp3NJ4Zp09uLzgd_5Y0328_Wa9Yjvgenc8gwn4@z> (raw)
In-Reply-To: <20240329225835.400662-1-michael.roth@amd.com>

From: Brijesh Singh <brijesh.singh@amd.com>

A key aspect of a launching an SNP guest is initializing it with a
known/measured payload which is then encrypted into guest memory as
pre-validated private pages and then measured into the cryptographic
launch context created with KVM_SEV_SNP_LAUNCH_START so that the guest
can attest itself after booting.

Since all private pages are provided by guest_memfd, make use of the
kvm_gmem_populate() interface to handle this. The general flow is that
guest_memfd will handle allocating the pages associated with the GPA
ranges being initialized by each particular call of
KVM_SEV_SNP_LAUNCH_UPDATE, copying data from userspace into those pages,
and then the post_populate callback will do the work of setting the
RMP entries for these pages to private and issuing the SNP firmware
calls to encrypt/measure them.

For more information see the SEV-SNP specification.

Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
Co-developed-by: Michael Roth <michael.roth@amd.com>
Signed-off-by: Michael Roth <michael.roth@amd.com>
Signed-off-by: Ashish Kalra <ashish.kalra@amd.com>
---
 .../virt/kvm/x86/amd-memory-encryption.rst    |  39 ++++
 arch/x86/include/uapi/asm/kvm.h               |  15 ++
 arch/x86/kvm/svm/sev.c                        | 211 ++++++++++++++++++
 3 files changed, 265 insertions(+)

diff --git a/Documentation/virt/kvm/x86/amd-memory-encryption.rst b/Documentation/virt/kvm/x86/amd-memory-encryption.rst
index a10b817c162d..4268aa5c380e 100644
--- a/Documentation/virt/kvm/x86/amd-memory-encryption.rst
+++ b/Documentation/virt/kvm/x86/amd-memory-encryption.rst
@@ -478,6 +478,45 @@ Returns: 0 on success, -negative on error
 
 See the SEV-SNP spec [snp-fw-abi]_ for further detail on the launch input.
 
+19. KVM_SEV_SNP_LAUNCH_UPDATE
+-----------------------------
+
+The KVM_SEV_SNP_LAUNCH_UPDATE command is used for loading userspace-provided
+data into a guest GPA range, measuring the contents into the SNP guest context
+created by KVM_SEV_SNP_LAUNCH_START, and then encrypting/validating that GPA
+range so that it will be immediately readable using the encryption key
+associated with the guest context once it is booted, after which point it can
+attest the measurement associated with its context before unlocking any
+secrets.
+
+It is required that the GPA ranges initialized by this command have had the
+KVM_MEMORY_ATTRIBUTE_PRIVATE attribute set in advance. See the documentation
+for KVM_SET_MEMORY_ATTRIBUTES for more details on this aspect.
+
+Parameters (in): struct  kvm_sev_snp_launch_update
+
+Returns: 0 on success, -negative on error
+
+::
+
+        struct kvm_sev_snp_launch_update {
+                __u64 gfn_start;        /* Guest page number to load/encrypt data into. */
+                __u64 uaddr;            /* Userspace address of data to be loaded/encrypted. */
+                __u32 len;              /* 4k-aligned length in bytes to copy into guest memory.*/
+                __u8 type;              /* The type of the guest pages being initialized. */
+        };
+
+where the allowed values for page_type are #define'd as::
+
+	KVM_SEV_SNP_PAGE_TYPE_NORMAL
+	KVM_SEV_SNP_PAGE_TYPE_ZERO
+	KVM_SEV_SNP_PAGE_TYPE_UNMEASURED
+	KVM_SEV_SNP_PAGE_TYPE_SECRETS
+	KVM_SEV_SNP_PAGE_TYPE_CPUID
+
+See the SEV-SNP spec [snp-fw-abi]_ for further details on how each page type is
+used/measured.
+
 Device attribute API
 ====================
 
diff --git a/arch/x86/include/uapi/asm/kvm.h b/arch/x86/include/uapi/asm/kvm.h
index 350ddd5264ea..956eb548c08e 100644
--- a/arch/x86/include/uapi/asm/kvm.h
+++ b/arch/x86/include/uapi/asm/kvm.h
@@ -695,6 +695,7 @@ enum sev_cmd_id {
 
 	/* SNP-specific commands */
 	KVM_SEV_SNP_LAUNCH_START,
+	KVM_SEV_SNP_LAUNCH_UPDATE,
 
 	KVM_SEV_NR_MAX,
 };
@@ -826,6 +827,20 @@ struct kvm_sev_snp_launch_start {
 	__u8 gosvw[16];
 };
 
+/* Kept in sync with firmware values for simplicity. */
+#define KVM_SEV_SNP_PAGE_TYPE_NORMAL		0x1
+#define KVM_SEV_SNP_PAGE_TYPE_ZERO		0x3
+#define KVM_SEV_SNP_PAGE_TYPE_UNMEASURED	0x4
+#define KVM_SEV_SNP_PAGE_TYPE_SECRETS		0x5
+#define KVM_SEV_SNP_PAGE_TYPE_CPUID		0x6
+
+struct kvm_sev_snp_launch_update {
+	__u64 gfn_start;
+	__u64 uaddr;
+	__u32 len;
+	__u8 type;
+};
+
 #define KVM_X2APIC_API_USE_32BIT_IDS            (1ULL << 0)
 #define KVM_X2APIC_API_DISABLE_BROADCAST_QUIRK  (1ULL << 1)
 
diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
index 6c7c77e33e62..a8a8a285b4a4 100644
--- a/arch/x86/kvm/svm/sev.c
+++ b/arch/x86/kvm/svm/sev.c
@@ -247,6 +247,35 @@ static void sev_decommission(unsigned int handle)
 	sev_guest_decommission(&decommission, NULL);
 }
 
+static int snp_page_reclaim(u64 pfn)
+{
+	struct sev_data_snp_page_reclaim data = {0};
+	int err, rc;
+
+	data.paddr = __sme_set(pfn << PAGE_SHIFT);
+	rc = sev_do_cmd(SEV_CMD_SNP_PAGE_RECLAIM, &data, &err);
+	if (WARN_ON_ONCE(rc)) {
+		/*
+		 * This shouldn't happen under normal circumstances, but if the
+		 * reclaim failed, then the page is no longer safe to use.
+		 */
+		snp_leak_pages(pfn, 1);
+	}
+
+	return rc;
+}
+
+static int host_rmp_make_shared(u64 pfn, enum pg_level level, bool leak)
+{
+	int rc;
+
+	rc = rmp_make_shared(pfn, level);
+	if (rc && leak)
+		snp_leak_pages(pfn, page_level_size(level) >> PAGE_SHIFT);
+
+	return rc;
+}
+
 static void sev_unbind_asid(struct kvm *kvm, unsigned int handle)
 {
 	struct sev_data_deactivate deactivate;
@@ -2075,6 +2104,185 @@ static int snp_launch_start(struct kvm *kvm, struct kvm_sev_cmd *argp)
 	return rc;
 }
 
+struct sev_gmem_populate_args {
+	__u8 type;
+	int sev_fd;
+	int fw_error;
+};
+
+static int sev_gmem_post_populate(struct kvm *kvm, struct kvm_memory_slot *slot,
+				  gfn_t gfn_start, kvm_pfn_t pfn, void __user *src,
+				  int order, void *opaque)
+{
+	struct sev_gmem_populate_args *sev_populate_args = opaque;
+	struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info;
+	int npages = (1 << order);
+	int n_private = 0;
+	int ret, i;
+	gfn_t gfn;
+
+	pr_debug("%s: gfn_start %llx pfn_start %llx npages %d\n",
+		 __func__, gfn_start, pfn, npages);
+
+	for (gfn = gfn_start, i = 0; gfn < gfn_start + npages; gfn++, i++) {
+		struct sev_data_snp_launch_update fw_args = {0};
+		bool assigned;
+		int level;
+
+		if (!kvm_mem_is_private(kvm, gfn)) {
+			pr_debug("%s: Failed to ensure GFN 0x%llx has private memory attribute set\n",
+				 __func__, gfn);
+			ret = -EINVAL;
+			break;
+		}
+
+		ret = snp_lookup_rmpentry((u64)pfn + i, &assigned, &level);
+		if (ret || assigned) {
+			pr_debug("%s: Failed to ensure GFN 0x%llx RMP entry is initial shared state, ret: %d assigned: %d\n",
+				 __func__, gfn, ret, assigned);
+			break;
+		}
+
+		ret = rmp_make_private(pfn + i, gfn << PAGE_SHIFT, PG_LEVEL_4K,
+				       sev_get_asid(kvm), true);
+		if (ret) {
+			pr_debug("%s: Failed to convert GFN 0x%llx to private, ret: %d\n",
+				 __func__, gfn, ret);
+			break;
+		}
+
+		n_private++;
+
+		fw_args.gctx_paddr = __psp_pa(sev->snp_context);
+		fw_args.address = __sme_set(pfn_to_hpa(pfn + i));
+		fw_args.page_size = PG_LEVEL_TO_RMP(PG_LEVEL_4K);
+		fw_args.page_type = sev_populate_args->type;
+		ret = __sev_issue_cmd(sev_populate_args->sev_fd, SEV_CMD_SNP_LAUNCH_UPDATE,
+				      &fw_args, &sev_populate_args->fw_error);
+		if (ret) {
+			pr_debug("%s: SEV-SNP launch update failed, ret: 0x%x, fw_error: 0x%x\n",
+				 __func__, ret, sev_populate_args->fw_error);
+
+			if (snp_page_reclaim(pfn + i))
+				break;
+
+			/*
+			 * When invalid CPUID function entries are detected,
+			 * firmware writes the expected values into the page and
+			 * leaves it unencrypted so it can be used for debugging
+			 * and error-reporting.
+			 *
+			 * Copy this page back into the source buffer so
+			 * userspace can use this information to provide
+			 * information on which CPUID leaves/fields failed CPUID
+			 * validation.
+			 */
+			if (sev_populate_args->type == KVM_SEV_SNP_PAGE_TYPE_CPUID &&
+			    sev_populate_args->fw_error == SEV_RET_INVALID_PARAM) {
+				void *vaddr;
+
+				host_rmp_make_shared(pfn + i, PG_LEVEL_4K, true);
+				vaddr = kmap_local_pfn(pfn + i);
+
+				if (copy_to_user(src + i * PAGE_SIZE,
+						 vaddr, PAGE_SIZE))
+					pr_debug("Failed to write CPUID page back to userspace\n");
+
+				kunmap_local(vaddr);
+			}
+
+			break;
+		}
+	}
+
+	if (ret) {
+		pr_debug("%s: exiting with error ret %d, undoing %d populated gmem pages.\n",
+			 __func__, ret, n_private);
+		for (i = 0; i < n_private; i++)
+			host_rmp_make_shared(pfn + i, PG_LEVEL_4K, true);
+	}
+
+	return ret;
+}
+
+static int snp_launch_update(struct kvm *kvm, struct kvm_sev_cmd *argp)
+{
+	struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info;
+	struct sev_gmem_populate_args sev_populate_args = {0};
+	struct kvm_gmem_populate_args populate_args = {0};
+	struct kvm_sev_snp_launch_update params;
+	struct kvm_memory_slot *memslot;
+	unsigned int npages;
+	int ret = 0;
+
+	if (!sev_snp_guest(kvm) || !sev->snp_context)
+		return -EINVAL;
+
+	if (copy_from_user(&params, u64_to_user_ptr(argp->data), sizeof(params)))
+		return -EFAULT;
+
+	if (!IS_ALIGNED(params.len, PAGE_SIZE) ||
+	    (params.type != KVM_SEV_SNP_PAGE_TYPE_NORMAL &&
+	     params.type != KVM_SEV_SNP_PAGE_TYPE_ZERO &&
+	     params.type != KVM_SEV_SNP_PAGE_TYPE_UNMEASURED &&
+	     params.type != KVM_SEV_SNP_PAGE_TYPE_SECRETS &&
+	     params.type != KVM_SEV_SNP_PAGE_TYPE_CPUID))
+		return -EINVAL;
+
+	npages = params.len / PAGE_SIZE;
+
+	pr_debug("%s: GFN range 0x%llx-0x%llx type %d\n", __func__,
+		 params.gfn_start, params.gfn_start + npages, params.type);
+
+	/*
+	 * For each GFN that's being prepared as part of the initial guest
+	 * state, the following pre-conditions are verified:
+	 *
+	 *   1) The backing memslot is a valid private memslot.
+	 *   2) The GFN has been set to private via KVM_SET_MEMORY_ATTRIBUTES
+	 *      beforehand.
+	 *   3) The PFN of the guest_memfd has not already been set to private
+	 *      in the RMP table.
+	 *
+	 * The KVM MMU relies on kvm->mmu_invalidate_seq to retry nested page
+	 * faults if there's a race between a fault and an attribute update via
+	 * KVM_SET_MEMORY_ATTRIBUTES, and a similar approach could be utilized
+	 * here. However, kvm->slots_lock guards against both this as well as
+	 * concurrent memslot updates occurring while these checks are being
+	 * performed, so use that here to make it easier to reason about the
+	 * initial expected state and better guard against unexpected
+	 * situations.
+	 */
+	mutex_lock(&kvm->slots_lock);
+
+	memslot = gfn_to_memslot(kvm, params.gfn_start);
+	if (!kvm_slot_can_be_private(memslot)) {
+		ret = -EINVAL;
+		goto out;
+	}
+
+	sev_populate_args.sev_fd = argp->sev_fd;
+	sev_populate_args.type = params.type;
+
+	populate_args.opaque = &sev_populate_args;
+	populate_args.gfn = params.gfn_start;
+	populate_args.src = u64_to_user_ptr(params.uaddr);
+	populate_args.npages = npages;
+	populate_args.do_memcpy = params.type != KVM_SEV_SNP_PAGE_TYPE_ZERO;
+	populate_args.post_populate = sev_gmem_post_populate;
+
+	ret = kvm_gmem_populate(kvm, memslot, &populate_args);
+	if (ret) {
+		argp->error = sev_populate_args.fw_error;
+		pr_debug("%s: kvm_gmem_populate failed, ret %d\n", __func__, ret);
+	}
+
+out:
+	mutex_unlock(&kvm->slots_lock);
+
+	return ret;
+}
+
 int sev_mem_enc_ioctl(struct kvm *kvm, void __user *argp)
 {
 	struct kvm_sev_cmd sev_cmd;
@@ -2165,6 +2373,9 @@ int sev_mem_enc_ioctl(struct kvm *kvm, void __user *argp)
 	case KVM_SEV_SNP_LAUNCH_START:
 		r = snp_launch_start(kvm, &sev_cmd);
 		break;
+	case KVM_SEV_SNP_LAUNCH_UPDATE:
+		r = snp_launch_update(kvm, &sev_cmd);
+		break;
 	default:
 		r = -EINVAL;
 		goto out;
-- 
2.25.1


X-sender: <linux-kernel+bounces-125490-steffen.klassert=secunet.com@vger.kernel.org>
X-Receiver: <steffen.klassert@secunet.com> ORCPT=rfc822;steffen.klassert@secunet.com; X-ExtendedProps=DwA1AAAATWljcm9zb2Z0LkV4Y2hhbmdlLlRyYW5zcG9ydC5EaXJlY3RvcnlEYXRhLklzUmVzb3VyY2UCAAAFABUAFgACAAAABQAUABEA8MUJLbkECUOS0gjaDTZ+uAUAagAJAAEAAAAAAAAABQAWAAIAAAUAQwACAAAFAEYABwADAAAABQBHAAIAAAUAEgAPAGIAAAAvbz1zZWN1bmV0L291PUV4Y2hhbmdlIEFkbWluaXN0cmF0aXZlIEdyb3VwIChGWURJQk9IRjIzU1BETFQpL2NuPVJlY2lwaWVudHMvY249U3RlZmZlbiBLbGFzc2VydDY4YwUACwAXAL4AAACheZxkHSGBRqAcAp3ukbifQ049REI2LENOPURhdGFiYXNlcyxDTj1FeGNoYW5nZSBBZG1pbmlzdHJhdGl2ZSBHcm91cCAoRllESUJPSEYyM1NQRExUKSxDTj1BZG1pbmlzdHJhdGl2ZSBHcm91cHMsQ049c2VjdW5ldCxDTj1NaWNyb3NvZnQgRXhjaGFuZ2UsQ049U2VydmljZXMsQ049Q29uZmlndXJhdGlvbixEQz1zZWN1bmV0LERDPWRlBQAOABEABiAS9uuMOkqzwmEZDvWNNQUAHQAPAAwAAABtYngtZXNzZW4tMDIFADwAAgAADwA2AAAATWljcm9zb2Z0LkV4Y2hhbmdlLlRyYW5zcG9ydC5NYWlsUmVjaXBpZW50LkRpc3BsYXlOYW1lDwARAAAAS2xhc3NlcnQsIFN0ZWZmZW4FAGwAAgAABQBYABcASgAAAPDFCS25BAlDktII2g02frhDTj1LbGFzc2VydCBTdGVmZmVuLE9VPVVzZXJzLE9VPU1pZ3JhdGlvbixEQz1zZWN1bmV0LERDPWRlBQAMAAIAAAUAJgACAAEFACIADwAxAAAAQXV0b1Jlc3BvbnNlU3VwcHJlc3M6IDANClRyYW5zbWl0SGlzdG9yeTogRmFsc2UNCg8ALwAAAE1pY3Jvc29mdC5FeGNoYW5nZS5UcmFuc3BvcnQuRXhwYW5zaW9uR3JvdXBUeXBlDwAVAAAATWVtYmVyc0dyb3VwRXhwYW5zaW9uBQAjAAIAAQ==
X-CreatedBy: MSExchange15
X-HeloDomain: a.mx.secunet.com
X-ExtendedProps: BQBjAAoAs0mmlidQ3AgFAGEACAABAAAABQA3AAIAAA8APAAAAE1pY3Jvc29mdC5FeGNoYW5nZS5UcmFuc3BvcnQuTWFpbFJlY2lwaWVudC5Pcmdhbml6YXRpb25TY29wZREAAAAAAAAAAAAAAAAAAAAAAAUASQACAAEFAAQAFCABAAAAHAAAAHN0ZWZmZW4ua2xhc3NlcnRAc2VjdW5ldC5jb20FAAYAAgABDwAqAAAATWljcm9zb2Z0LkV4Y2hhbmdlLlRyYW5zcG9ydC5SZXN1Ym1pdENvdW50BwACAAAADwAJAAAAQ0lBdWRpdGVkAgABBQACAAcAAQAAAAUAAwAHAAAAAAAFAAUAAgABBQBiAAoAEAAAAM6KAAAFAGQADwADAAAASHViBQApAAIAAQ8APwAAAE1pY3Jvc29mdC5FeGNoYW5nZS5UcmFuc3BvcnQuRGlyZWN0b3J5RGF0YS5NYWlsRGVsaXZlcnlQcmlvcml0eQ8AAwAAAExvdw==
X-Source: SMTP:Default MBX-ESSEN-02
X-SourceIPAddress: 62.96.220.36
X-EndOfInjectedXHeaders: 35971
Received: from cas-essen-02.secunet.de (10.53.40.202) by
 mbx-essen-02.secunet.de (10.53.40.198) with Microsoft SMTP Server
 (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.1.2507.37; Sat, 30 Mar 2024 00:00:51 +0100
Received: from a.mx.secunet.com (62.96.220.36) by cas-essen-02.secunet.de
 (10.53.40.202) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2507.35 via Frontend
 Transport; Sat, 30 Mar 2024 00:00:51 +0100
Received: from localhost (localhost [127.0.0.1])
	by a.mx.secunet.com (Postfix) with ESMTP id C46B0208AC
	for <steffen.klassert@secunet.com>; Sat, 30 Mar 2024 00:00:51 +0100 (CET)
X-Virus-Scanned: by secunet
X-Spam-Flag: NO
X-Spam-Score: -2.85
X-Spam-Level:
X-Spam-Status: No, score=-2.85 tagged_above=-999 required=2.1
	tests=[BAYES_00=-1.9, DKIMWL_WL_HIGH=-0.099, DKIM_SIGNED=0.1,
	DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1,
	HEADER_FROM_DIFFERENT_DOMAINS=0.249, MAILING_LIST_MULTI=-1,
	RCVD_IN_DNSWL_NONE=-0.0001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001]
	autolearn=unavailable autolearn_force=no
Authentication-Results: a.mx.secunet.com (amavisd-new);
	dkim=pass (1024-bit key) header.d=amd.com
Received: from a.mx.secunet.com ([127.0.0.1])
	by localhost (a.mx.secunet.com [127.0.0.1]) (amavisd-new, port 10024)
	with ESMTP id Mq7LLScd9YAI for <steffen.klassert@secunet.com>;
	Sat, 30 Mar 2024 00:00:50 +0100 (CET)
Received-SPF: Pass (sender SPF authorized) identity=mailfrom; client-ip=147.75.80.249; helo=am.mirrors.kernel.org; envelope-from=linux-kernel+bounces-125490-steffen.klassert=secunet.com@vger.kernel.org; receiver=steffen.klassert@secunet.com 
DKIM-Filter: OpenDKIM Filter v2.11.0 a.mx.secunet.com D255E2087B
Received: from am.mirrors.kernel.org (am.mirrors.kernel.org [147.75.80.249])
	(using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
	(No client certificate requested)
	by a.mx.secunet.com (Postfix) with ESMTPS id D255E2087B
	for <steffen.klassert@secunet.com>; Sat, 30 Mar 2024 00:00:50 +0100 (CET)
Received: from smtp.subspace.kernel.org (wormhole.subspace.kernel.org [52.25.139.140])
	(using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
	(No client certificate requested)
	by am.mirrors.kernel.org (Postfix) with ESMTPS id 594661F25574
	for <steffen.klassert@secunet.com>; Fri, 29 Mar 2024 23:00:50 +0000 (UTC)
Received: from localhost.localdomain (localhost.localdomain [127.0.0.1])
	by smtp.subspace.kernel.org (Postfix) with ESMTP id F097113EFED;
	Fri, 29 Mar 2024 23:00:19 +0000 (UTC)
Authentication-Results: smtp.subspace.kernel.org;
	dkim=pass (1024-bit key) header.d=amd.com header.i=@amd.com header.b="nc26yfw6"
Received: from NAM12-MW2-obe.outbound.protection.outlook.com (mail-mw2nam12on2041.outbound.protection.outlook.com [40.107.244.41])
	(using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
	(No client certificate requested)
	by smtp.subspace.kernel.org (Postfix) with ESMTPS id 6A03713CF91;
	Fri, 29 Mar 2024 23:00:13 +0000 (UTC)
Authentication-Results: smtp.subspace.kernel.org; arc=fail smtp.client-ip=40.107.244.41
ARC-Seal: i=2; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116;
	t=1711753215; cv=fail; b=fjdA/KPuC/sIeqaVY0oi6TFRyIES80fOAdBPMtaMQ9A82xaK722jPMpDP5uRpw1TDRnbpyFznjP9/Wx21YefB827hdm3kEvIx74zjcXiunSTLqHcgzJwIztYjf1ofsZc2kKi6AWLKfuspBDhUx8scQLyGT8+MjjyUfS7WXaUfwc=
ARC-Message-Signature: i=2; a=rsa-sha256; d=subspace.kernel.org;
	s=arc-20240116; t=1711753215; c=relaxed/simple;
	bh=GxarBB3QQXDtAmxKX8+rgDQfQVE3hghOjKcRWraa+k4=;
	h=From:To:CC:Subject:Date:Message-ID:In-Reply-To:References:
	 MIME-Version:Content-Type; b=HsE8q9F6LYfYnMlfLKuLnv9O+oEuUbw9RNotxN5x8lROSKV36F8erowkx3T8A7TuDXzr6O+kU4CrCEBqJ710cdP0htYrMyVI1mRWa6lOwWkBhSGSmyBwm2ctHe9IpUAvbJoSHIn4mjehfry30ZOKzrAsZESfAH+1dlC89lUeS94=
ARC-Authentication-Results: i=2; smtp.subspace.kernel.org; dmarc=pass (p=quarantine dis=none) header.from=amd.com; spf=fail smtp.mailfrom=amd.com; dkim=pass (1024-bit key) header.d=amd.com header.i=@amd.com header.b=nc26yfw6; arc=fail smtp.client-ip=40.107.244.41
Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=quarantine dis=none) header.from=amd.com
Authentication-Results: smtp.subspace.kernel.org; spf=fail smtp.mailfrom=amd.com
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=BElzFv0DBW0MUfZQD8UJs5q9+EA6fwy/roXbmEq+TK27HwkmLuLGaSnZEaC4z7LizwxFP+9CcgtZSYe3Ii/x3Qmyx2+qpZC8UzOKYuThs/JgABKZIcsDlHgXuf91vyYHD+eNeDFavLFzfMdZo2aHfXt6nbKGXqbANG1fHpmqqa/XuV/gj8KYH5rwG+G2KsejSM58/o+SoRJo4tf0r7lMElBZNkVB7ERvDWxQuuE+2+oUQLMCIXrnckx38ToRkbf0LSv3pwmBSoITpf9FxRved2imYa055K8dViM8qFqfybVrwd9UIQYfHaZdKZ1RO+Q8fGV/oNpLYiqpqYBgwaOOtg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=RRM0X1XPmrPHJ/xejsVZHGx/fhXzSZlUOWEIzd5vxwg=;
 b=napnASBwyJG2prTZxam5z33xxl91ON59wzWY5AlObaNMOHV0cnd5AXmXzslMwl1QfB6IKjCNbGRvKeaSztlCXq1EJmJRKNxP/QGz33VOQxT6Ba77MSmZ9Gvharo7064GKpA8UYIMK8cOKZHtCpeZ7KJoUep/ZNRgl6SMVToBBAzcZaOe+6QjxVtGP0/o8HuSAW/+wy96FFUxfFexc0D6205fpRzXzn5uVuIzoZMwGPSHuh88nLjcONk5HXY29Ev4ytXUbmwKtvwDnx/Q0QAqKpocUh5NtX8894m8J5EScyJE5OVIw+bypKeljQQaBmiv3JPmTl+HzucpKl9Fw5SmNw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=vger.kernel.org smtp.mailfrom=amd.com;
 dmarc=pass (p=quarantine sp=quarantine pct=100) action=none
 header.from=amd.com; dkim=none (message not signed); arc=none (0)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=RRM0X1XPmrPHJ/xejsVZHGx/fhXzSZlUOWEIzd5vxwg=;
 b=nc26yfw6BEddsMoLxmBknQWv7NoJVdW5TsUPbDB5WD56lIuKOC7ktJ7shNNMaS3nlkqYjDppMlO0nis5MV9UNDgd2MVhA+tVr1V4K8Zjyb8ngbfZX76ombAOPObrwYT1QJa3l86MicBWrhhigVCjwXVrKkwqgm6WOguSMG1TIU0=
Received: from BYAPR02CA0060.namprd02.prod.outlook.com (2603:10b6:a03:54::37)
 by DM6PR12MB4106.namprd12.prod.outlook.com (2603:10b6:5:221::7) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.7409.41; Fri, 29 Mar
 2024 23:00:10 +0000
Received: from SJ1PEPF00001CDC.namprd05.prod.outlook.com
 (2603:10b6:a03:54:cafe::66) by BYAPR02CA0060.outlook.office365.com
 (2603:10b6:a03:54::37) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.7409.40 via Frontend
 Transport; Fri, 29 Mar 2024 23:00:10 +0000
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB04.amd.com; pr=C
Received: from SATLEXMB04.amd.com (165.204.84.17) by
 SJ1PEPF00001CDC.mail.protection.outlook.com (10.167.242.4) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.7409.10 via Frontend Transport; Fri, 29 Mar 2024 23:00:10 +0000
Received: from localhost (10.180.168.240) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2507.35; Fri, 29 Mar
 2024 18:00:03 -0500
From: Michael Roth <michael.roth@amd.com>
To: <kvm@vger.kernel.org>
CC: <linux-coco@lists.linux.dev>, <linux-mm@kvack.org>,
	<linux-crypto@vger.kernel.org>, <x86@kernel.org>,
	<linux-kernel@vger.kernel.org>, <tglx@linutronix.de>, <mingo@redhat.com>,
	<jroedel@suse.de>, <thomas.lendacky@amd.com>, <hpa@zytor.com>,
	<ardb@kernel.org>, <pbonzini@redhat.com>, <seanjc@google.com>,
	<vkuznets@redhat.com>, <jmattson@google.com>, <luto@kernel.org>,
	<dave.hansen@linux.intel.com>, <slp@redhat.com>, <pgonda@google.com>,
	<peterz@infradead.org>, <srinivas.pandruvada@linux.intel.com>,
	<rientjes@google.com>, <dovmurik@linux.ibm.com>, <tobin@ibm.com>,
	<bp@alien8.de>, <vbabka@suse.cz>, <kirill@shutemov.name>,
	<ak@linux.intel.com>, <tony.luck@intel.com>,
	<sathyanarayanan.kuppuswamy@linux.intel.com>, <alpergun@google.com>,
	<jarkko@kernel.org>, <ashish.kalra@amd.com>, <nikunj.dadhania@amd.com>,
	<pankaj.gupta@amd.com>, <liam.merwick@oracle.com>, Brijesh Singh
	<brijesh.singh@amd.com>
Subject: [PATCH v12 11/29] KVM: SEV: Add KVM_SEV_SNP_LAUNCH_UPDATE command
Date: Fri, 29 Mar 2024 17:58:17 -0500
Message-ID: <20240329225835.400662-12-michael.roth@amd.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20240329225835.400662-1-michael.roth@amd.com>
References: <20240329225835.400662-1-michael.roth@amd.com>
Precedence: bulk
X-Mailing-List: linux-kernel@vger.kernel.org
List-Id: <linux-kernel.vger.kernel.org>
List-Subscribe: <mailto:linux-kernel+subscribe@vger.kernel.org>
List-Unsubscribe: <mailto:linux-kernel+unsubscribe@vger.kernel.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
X-ClientProxiedBy: SATLEXMB03.amd.com (10.181.40.144) To SATLEXMB04.amd.com
 (10.181.40.145)
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: SJ1PEPF00001CDC:EE_|DM6PR12MB4106:EE_
X-MS-Office365-Filtering-Correlation-Id: c27c7199-0608-4e58-824a-08dc5043fc68
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: 3CPhwmHFRTbcoEsG1eJurOstXAgzf72Ze7qRd5sd1y6H95KtLVWAy00ly7s0y7UOv9/Fa0d8LCHeobZEAV2wMHMBwvDPt5XAgtaP9QWZ+6Pr3bZziTFcZgcMbr04m8pJ+cA9GwKvf6S/OMzrLt1uriEl9sVG2Bf4VpxzIMljD40wtDfDmpx31u8s23d/bIMGv6jQFVj3JsuuUr7+HOjkJs/H2mEbB7cC3VTL0Er3ZMCNT6/L4bqA37k9zeBppe0b5nePPH2UflxHzfh4xMNZ6ttcvsy6mHWLMBExEFEIQhgQ1TOSYMs8niYK+J7Io8C7NWgJtTPqSs9KwMJTb6+9bCVfzJj/ZLfHIJofmlifP5hfGvMx93ymUl3BiC9gNvHQiHNEoEJzJao3IpwR3tvvhnaU3WR7e+uryr7q7iOu9+JcYiXmTkCvbv2nJKeo8lbfwacgdTl6AOMDpdHAitMLR2yKnrPiwH9iJqUttjiJRr0tl6Nw0MUpWYJdQmvO2WKorsDgqWLh1Kh7yCTmsLHo3/F5EQ95LrJUpo/oDYlT4Y8rCFA8M/0cQ/BoREHUAEIJf31yyt0jDtOkCL3L4IuUvAGqIRGTU2nEaPf0gSnJ2SZa7+lm1s/2tb6l75eI+PLBLm7LdN75fg4zFqX9400AtenkM7z03+WcUgV0uQT6QARZn2e8eElJMYIpPr3dLb7aqYFKckjQdmJgdLdQvqbbyTfg7fdW4eBNPlSyVZ81qEcbrDLftDw463QUQ57VRGBO
X-Forefront-Antispam-Report: CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB04.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230031)(82310400014)(1800799015)(7416005)(376005)(36860700004);DIR:OUT;SFP:1101;
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 29 Mar 2024 23:00:10.6256
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: c27c7199-0608-4e58-824a-08dc5043fc68
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB04.amd.com]
X-MS-Exchange-CrossTenant-AuthSource: SJ1PEPF00001CDC.namprd05.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM6PR12MB4106
Return-Path: linux-kernel+bounces-125490-steffen.klassert=secunet.com@vger.kernel.org
X-MS-Exchange-Organization-OriginalArrivalTime: 29 Mar 2024 23:00:51.8224
 (UTC)
X-MS-Exchange-Organization-Network-Message-Id: 260b5ce3-ecf7-489e-dffa-08dc504414e3
X-MS-Exchange-Organization-OriginalClientIPAddress: 62.96.220.36
X-MS-Exchange-Organization-OriginalServerIPAddress: 10.53.40.202
X-MS-Exchange-Organization-Cross-Premises-Headers-Processed: cas-essen-02.secunet.de
X-MS-Exchange-Organization-OrderedPrecisionLatencyInProgress: LSRV=mbx-essen-02.secunet.de:TOTAL-HUB=33493.502|SMR=0.324(SMRDE=0.003|SMRC=0.320(SMRCL=0.098|X-SMRCR=0.320))|CAT=0.061(CATRESL=0.023
 (CATRESLP2R=0.018)|CATORES=0.035(CATRS=0.035(CATRS-Index Routing
 Agent=0.034)))|QDM=10812.197
 |SMSC=0.011|SMS=0.413(SMSMBXD-INC=0.407)|UNK=0.001|QDM=20522.834|SMSC=0.219|SMS=5.705
 (SMSMBXD-INC=5.210)|QDM=2146.601|PSC=0.016|CAT=0.008(CATRESL=0.007(CATRESLP2R=0.004
 ))|QDM=5.359|CAT=0.006(CATRESL=0.005(CATRESLP2R=0.002));2024-03-30T08:19:05.380Z
X-MS-Exchange-Forest-ArrivalHubServer: mbx-essen-02.secunet.de
X-MS-Exchange-Organization-AuthSource: cas-essen-02.secunet.de
X-MS-Exchange-Organization-AuthAs: Anonymous
X-MS-Exchange-Organization-FromEntityHeader: Internet
X-MS-Exchange-Organization-OriginalSize: 23885
X-MS-Exchange-Organization-HygienePolicy: Standard
X-MS-Exchange-Organization-MessageLatency: SRV=cas-essen-02.secunet.de:TOTAL-FE=0.054|SMR=0.022(SMRPI=0.020(SMRPI-FrontendProxyAgent=0.020))|SMS=0.032
X-MS-Exchange-Organization-Recipient-Limit-Verified: True
X-MS-Exchange-Organization-TotalRecipientCount: 1
X-MS-Exchange-Organization-Rules-Execution-History: 0b0cf904-14ac-4724-8bdf-482ee6223cf2%%%fd34672d-751c-45ae-a963-ed177fcabe23%%%d8080257-b0c3-47b4-b0db-23bc0c8ddb3c%%%95e591a2-5d7d-4afa-b1d0-7573d6c0a5d9%%%f7d0f6bc-4dcc-4876-8c5d-b3d6ddbb3d55%%%16355082-c50b-4214-9c7d-d39575f9f79b
X-MS-Exchange-Forest-RulesExecuted: mbx-essen-02
X-MS-Exchange-Organization-RulesExecuted: mbx-essen-02
X-MS-Exchange-Forest-IndexAgent-0: AQ0CZW4AAXIZAAAPAAADH4sIAAAAAAAEAK07CVcbR5otdIEMPmPHjj
 NJxZsQAZK4MbFjb7BNEl6M4wWcuXZev5bUgl4ktaZbArOT/Nn9Jfsd
 VdXVhwTxROMRreqq7z6rKv/X/z7we0/Ei8D7Hzc8EYde//hEfNvkn4
 0Qf37n9NqNlt97PleZq+yIU/dCOOHAbQ2F3xGO6DqjfusEJgqnLw7f
 vBXHIzccCi8UXt8bek7X+1986Q3FuTc8Ec5c5bTvn/eXe64TjgK3LQ
 bORdd32uL8xGud4LrhidsXbr8VXAyG8N7rD30JtOf2/ADRz1UGgVs/
 A+BtB+cMAu8MHgDWsRsCIW0GonEQDBgSBNQ/DpwBYJurMPGi5feH7v
 shvHUJHFH60y/79uHuLzbwZL/eeffm5Y/24dHOwZEIEZQzJHhE11yl
 Bbw7wyExPgzdLgimM3QD0fT9IXDfQNGBbFuucLrdJLUBPAX+mdcGzM
 0LBmkDq512TfScU1eMQhdlDfhAeGc9+xhe2gN/MOoClOoCcucGHQeg
 A5cnwH0Xnk68sCGOkES37wZOV3S6/jmL1wGKDSzALhAl1wF9fstBoo
 k/SWIY+i0vEg2++eHtzlwlcPr4vumSipW+mQ/XAckOnGDotYDQQLSQ
 db8zV8kQ7Lu3r3aOdmugiMEFggKtOqIDpom8B+EAeZM69ENJVW2uoh
 VNpPrAkJIKYWs6rVNmrs3aP/eDU5Rk6A4Vh3OVg/23YG3DwANGOn6A
 gwoFylMpC3F5YThSkkFT73hB7xz0hwbQ7dJ0abfKvHFqj9T/PYAG60
 U+AEsPROz3gRCXge3+UkeA6Fdex2vRW2k1x323Xfc7nXrz4sqO+tKv
 t90zt+sPYC2u2wffctyuOPBBfd/2+FcjgF/GqgSuD1mzE4Ldge843c
 AR3zr0q3GKv4w19Xp9riIajcbymRcMl8Gkl99vby3DhDp7eF1KEYUQ
 gE/B51ch1r8RS/CBpU7QOqEl4FLdUdtdHjkDb9kJewircSLiH1i6ui
 niCxFniP93zxotMebzq1hbXSWc8Q8AWgfdd8FAQCbgAeCoa1uboFkw
 ViQ6rC4toPLaXqcj6vVjCH7O8iu/NeqBoZFur8p584OWzVW8ftt9L5
 zVleb26uPW6tZau9HYWNvadpzN1vr2iitWV1a2NjZIFx9I21wFRPHB
 BH73nahvPN6ubYkl/LOxKWDkwB2Ogn74RKwIdI5Rq+WGYU3U++4xQD
 9zcdQNAj9A4YrDDOcRfw/7g3rnvO40vX/Y5M+dUQCzAtF2h47XRRC4
 SEZ+rz8YDcnRlla/aWTFfA5N8L4+6QPv4R+G27EgILj1ehxFMKq1iT
 jMfBhRdJirq1QA4CgKUthzZAKEoCso5tZkblPRiDJYfxhGmS5KxTK7
 AUCV3yA6j8tutSh7KpX1j5dlpmVszpCD/xJRotMhZXiItU0Icb2e28
 Z80b0QgLLtNLuYxRSxkS1gNQGAsvJLjHbQGmYAKiswp6LHcYblqmHg
 A9/4HhIxwuNUjGBkHEYDTaUxSNUaQdPtYHAe9SH9nXI1g5SFLshsGD
 ZYv3tEQOD+c+RhVaGrAK2WMJkEMQ1rxZ84YMInTpvzzhKqYH93/+eD
 v9o7R0cHey/eHe3abw/2fkFjARYCrzmCxAPJCqAKp33mgBAa2uzbpt
 8BODQn1upRCuwhGRulH3aDkP3AC2UtJxl86wROzwW5hqLq9ReeiHAY
 jKDSE1h2QKy0wbtsdh17NMDai9f9DsfF6U+e8F8VaCWWsUjEv4zJ6m
 Pbo60Ncdzp2+EQqoynanx5UfxAtoMpXPRHvSaYCTgFutqyND2hXash
 FpfHAh857Xbw1BwH4O90RYJvgU+sKAgeIAHjRzyuxuS2x2NYXxNdt/
 80Pg4YNk7rYEGYV/H9MVpqH4xpyAUJ1kjporgxDsm2GF4M3DQSjFX4
 RlaWEhiXPal6LsHDb09Zf+cQV9kYsWw8B3ohUIxkHYWgbMKAFe5/tN
 2O13e/boPBJdRvhqK3Oz/s2kd/fbtrv/n5YH/n9WWz/rZ78PNlc969
 2d/dOXx3sPvqspmHuy8Pdo8OL5v28u27vVfMwgflIHK+E6jGZYl8LD
 XhQV+zhKlBN0fsluKVe+ahvemYsPN2D4afZXxSZcdltVLzshmqmFjf
 XGm325trWxuu02h8s7nlNjc3tlsr28li4lJ4XDdcOg1LhK1vNrFEwD
 +PsUJwwZ8FxohWr217bQwMIIjIqkEHdVVEq7gbkvGKDJXGEl+21uMN
 iolNzXpzYO/v/AVekVsg0dtrW0j09trj2toKUj0+vlHoQi5iHnvsh2
 fnf1/d+sdTCRVoA+Z+cgeUCsKLfoszmGpATL8Lvd6g67W84YX0Wul6
 E/3MiA0r71cvXYRuFwsoK+/XL10U+aFetHHpIumSJqbNSxeRg8bJ22
 J//T2ZJplgkm84O8RGZUSPjckADGMybAqT+L+sgS+/tOHLfne4a6+v
 vdg7svdeHZrUV1ffvX4tvv1WrCyMX/1q73Dnxetd+8XBzzuvXu4cHt
 n/9W7v4Cdj9Wq6JxnTDzXHvFChYKv1uPX4sbu+7m6tNRrONvxvbXuz
 ueFsjAsFCTCJCJB4iz60tvEYfQj/rG+yD0El0RJnPng9qq7tontDRw
 7VT3XUDzlfYhnIOxkLkU/hdN7viC2aN3/VxBsQ0wI5HPmbxIcA0Uoo
 mwVuq+t4vSpqf9DpA4olw16kbRFxUA7YyWVcJDwT/1r5zbAQRAClUU
 0EraextIizGwO0MVhj22HPBZMdVgEvKpOM/fDHve+PFgxgQQvmEgE+
 RsgqusbL/VeRexzsvny9s7dfE/MIHv4AahOA1xHVP+8cvLF/xn8vd6
 tBa2Ehq/xaXswoN7CqgJIyPPFH3Xb/a1TFYACdxAisJhB93PboipYX
 QOEK4oVqFmpEyGeIlUviNEAluw7kTaz6o+0eTJuArI+VHZTeEPicDu
 2AQQZtZMHKKJDI/13nlPQUomxr4CaGPH6LqSSgMldq6jcVUbSdnOAO
 VNAb2LhrZ4cnEJjbylZqnLoGx4DvzO0K+q5hM4PPzmnClhBc0iBIuU
 nwBJqAJdUI0+fnNewrME62SqDsEIq+KkMVz58njW2iQFKOOuo3IWrY
 Tui1q1H4FYvwVRNjHNdIiEmvakPRNOQtuehRJt61lcdULqytrmzUVr
 fNuKH82My7aXIS6QFcSCw6wfHACCYmyzpUaBJjW7M2LA0TGSXKBoam
 cWWnnRjsnNvUMhmZIxaUImzGvudElrhTsMOuPxSL+F3LcJPUBxPgME
 qDNQI1oEEyG1I0cAZtEUANWlcCihz4AUQFuX7RHzj/HLnjI2qGYBdx
 PD70TDCcpykYSqW4+0orYe780Ldp/KxXhb8L9edqRkIVfW6LnkEuxd
 hLhC8k59hqp/iZWEm8A5upCc8Y1DKNu9MgAPNujo6rj76CXlqLXHzV
 7b5HYZs/JU1ftf+7/yhL5LbdAUu37ZqpOlIYr0x4MlaOVZgJ1BvzPW
 IGR8S3Bj1LEga9WVqCeUtLmVkiKyXG6y2wcqm5eFpUHwqQTshRIuM9
 ipcCVZwd/RoC4RfS9m0vVDqqkmsA8dnJTX3i+vieUhBv8tPe/g/fv4
 HKkrRx4oT6pECdUpk7OGOUNFFpCxnsqg9YFIisvrv35ped1xPmNQMI
 8hnvf8uUFkMlJfn+6WiAyQyPRi6qmMYWsPRYEh5UDUoh8JjKPepDOQ
 gA/vqr1t8fJGx1YnNhnDIKzocU7t0acvIEfENjfjLeUZKfmA5q7Lua
 gT9Y1DqbK7vUIj5Olnk18fYH+/XuL7uv7Y2frhRj6UOx0x1y+sUgBy
 VUAIF2vMI+SEktv3/mQmQwtBSdnWl1fLAG/ji56zC9tJQdMWRAahy3
 hu/tqPwehFjMV0GckCXAQeS+cRZhCoLaHkxU7zZknRMAJVW9MAkElW
 RYjAEQrf6jn23wgKphDpeCoO0l7gxiCbP+PFGNqA+bp80pMwxHLrUT
 Geu5dKkJs9NI7plczVTnJcUQUzLwqGrojzFctV0nT4FULpI9BlkrWP
 H7mi7C+Pfvtl8KHpdxMx4iMpZqPrXdXE6Jdo3xUzP7OPVZFH/Gdsvr
 0/mP4G0V5I3ObtShucOnCm4Lj2UmQtPbVeeBR7vZeBj0fkAr1RaWPs
 CiBg86gokQoYc5w0VD6CSiKyOhL8+CcD9en7WR/o+9/vFEiHhQQ7qp
 B+7AD+T1jfELJgJ7iXv1dMxC3NCFBM1g6I+CFgyOOh3sXv2JkKJ7EM
 gW3gghsOZlAgq4dHg4EZK5BP7xARqrlqW53PHcbjuU7iDUbvd4iOp0
 kK4sjJ+XdUAhP2To2cFJPHs2cZdvfn6yH0zwPgSNYA92j2wqpPZeAf
 iDnf2JkUR9uG85U5uAly/I3B7QCd/M7mNTdIoGmZ9Oew7WbC2ni52Z
 DhFXowuFj4dKmJjQzKrQxOFy0CyXH3t/u3oUTxJXi2BcJWKpTxSuox
 qDgoa01MideMOHfQPi8xV5Ph31tciqROgkcWeXEuozqQDJBDApX8Xz
 lPveo0N32uxnm8XM/FUbd07aPr6CIleZd1tgl8xneI1LclUiR+myKL
 OUwN5QtoIeNIJ68lNq/MZj+TcsPnvjzR1m77yl+srftbWTseXwwdsF
 k7cssnYs4n2vgT9j/ZXXZh9uDPB4P8xeEdscgh/4YMyMbdTJ7j+1ya
 G2PuKm/oUihvbgSXTYFH6RqqUzy1DUu9HnxkFT2MJ7ghy45plBcI6t
 DRXN7MEwqKKi689xCwL6H6yn/U6V5y5kRSWN9vudd6+PMjjaO7QhW/
 zwZveVBNPoun0z1AGDcbBqHqW0L8alNHkYl8ppV1pMh3IfttQ4mvsw
 AOqY7sNWU0yfqIgM/etNuUgDYjlSwaSdNexS+fIS96p11bIildymRg
 EyI7xJjObGWmJEb5HVTAkkklOs9l4UeEOUbgQgeXi36Gt1E2MApSjt
 buAuE0KXdzbU3oe8hWzAktsgOKnj490MCaYOvtb26I4i1e3QtHtQ7r
 WfmKtjkIRYXaDLIphtEYqMDbj34nDpZ+584atGAsAaA0C2cJ+s6bp9
 utVk3K4985zxl5cS4ODDt7XwvCCJa51xvQVc5sUWedMZ0feBdqeL19
 IuskhJI/P4wAl3nYZ4k60xVlby/p/Y338HptvF5gjvuJ316s97vZEt
 +yiM36H7T8QJ9h1ciD4QSBfhj+PYO86oi1f6iI/A/RoFHmAL0HSH50
 i5w1OoZ+Eb6HLHUcZ7kGoM4FgJ860/B+8OeHhX2xlAJ+HQzfhRt01d
 1NCjm0AxeEhVQ/zon7tnuJNPjKIBhFhYnYLknQD6COfY8fp4l9Wnq3
 1050ycu90uXeQ3wIFttkZBgFf1lJExIyDGFr6hUujEo+vteEm7deK2
 TtmQyVFi0AZugI0ONvShLzsmZyj4ypLPN+uhS3Sd0OMLYmASIajLaf
 qjYeI8clG7mm5XycNIbKANvIlI3GpmoR2VM+N+6Q1H1CmFMSsyWqMe
 6O89ya86nxBoInwoGfGOPaQ7OcCb28mIlDwepF1xnG5DQ2k3oy1ICS
 V7Z/yyfedjHwQJ8htbyaUqoAbvIAFUTtKps7D0CrmbZQTWRLCPzeYT
 IazfUpCejlvDByFJEY6dHtLRbLLokMtHye4ivlansVRRFZ/XJv22Bh
 dxzieXA2Ohxf9jiWdjThNTh73Yaqb+AxQyN2k1NTEfQ5Q6lB7T9rDu
 ZVeeoXLzMDSxNJ7aU+SZu3qp3J7cW5a2Cgb8JOmRfDP4Mp/MalT0cS
 2S5QJez28Nu+kOJXaQqk6d0yfhZgcj/6rj79UtPv5ef7xe+wYPv/8d
 zPLTcsLMS+10ae5JNE99AnWCZB60E5Z5SS1dskl8kj30OKy8p/wkIz
 DF0couMIU3dbwo8arfbZcy6hi+oriXfGvEPfyPAdYaa5uNVbwcMFex
 rIJVKlvTRatczll38aGUtwp5azpnTcFDySpPWXl4nrEq8AAjMAEeZq
 05+C5YxZyVh5Fpa4aAlHgQJsM3/KTlBVg/ZxVgsKzgw1tYAt/wPGfd
 gGnwCv7hSM66jwQUGLuGSajLPMjjZWuWoWmwABMmA03XrBJM5lVENo
 K9JfFeY15KOEEuh3HCUlJrSzACEyrWNWYZHhhansZh5Jo1y4TxK2bz
 gUF5SXFkCqRoTTP7+I1iQTmwKFi8/FDGaUWmjQlgOfBP/iZ+K3qccM
 0wBKCNJwBJZXqYs26XEF2ZBFLI0nhpKmfdsPIkHJZJCSZ8xEqhCQkI
 kyeQlMolnFCWKHKzi4p3+HfXupfPAS5rioSjFcFvc7k79CoaUeMP8x
 ZY5IOcZYFZohnkZnDEKlSsWwXCnstZW9bUnHU9hmvsOOpl1ro+m5sr
 WVbJmh0zbSZ7PAcUAugi0pOb4meiHAljIklBFWmcViWhGjTCnFWRhi
 oNkv1OukCuQgwWUwZWy2dYHbqSdhl4BiJuI/ZrbIosNPamGSJGeyJb
 Wpn8FLEnPbHM1gUGr9yhpN1Tf4PtAUBGQe4T+Q4Bl75GDjjNhs1A8h
 QuYBpA0EtYFEQnrL3OC02C4a32RMN+UKSf0lsOO8wpKWImLz3ipik9
 7ewMhwGWLIiQABweplNynh2j34fIRXLynxjXDArhlg4vJtkFmlO07s
 4o+RuCAi4eEbS7U2gqeW0hWv4q3n5GP+/pSKKjB4WsR0XDwDgEzdDa
 PAWiKesWyy0vLbNIdN6EwU/igwXrOgyK+GDF+ggG/xQfLBPMh/HBIo
 n6Ui+QdhghBY6KFMNvKKv4TLq8NZURQ6ypMeP5pMtnTytnj+dKaZcn
 Syhy/qrQtCnrDjzPkn9NW3dT3nQzZSQ4/7pcC953fVxgYQOLQ7tDia
 9EE25pf+Q8y7quQAqghSn93spT3MjScvQqpevoVYbG1auU3uHVTJaD
 fGQ4yO28+s5bN4mRW5py5oWtGt4WctMkeSmTeTWBXR6yrZ6QsJMooR
 hGYgyOtRBjTjljMMs2tNbYPG5aBSifpikQlcg77hE7iA6Tyw3JI83k
 +EDAb2u5yfqBQ4dkcI7mAK6bKOccsD+lqCoX0CZR+FPKPtmtTCsiaB
 WS0o0pigyzUVRf5hqGKx/4hppNWql1n4LJtCZJhywVf5bYqjlA5aWn
 PKTwNaej0EdqudIvOXvOmiZFcC1XAAdBpDeZixvRfGYNXoHwP46hg1
 U4s8ghF5KvEsVHmoCCgsbPDJDjISnoLov9js59uZtFC/JoiVXMz5Sg
 y6bitKwYGgO5JqVdZC1PWbM8fi8qiW+TzG8wnBIyhZpVorilUdxTPq
 UcPMKieAGWF+BnkYxcOVeZH9jyiRIug6tcEhSQ5rspqkpcP9MzsCn9
 cVaHbuCFEpMyOZw/ja+QkVKuwuGR3B/LMDDOshzkSrWID5pU0KbkF2
 w7H1EehQhuBKbJee8XKDurcRDCXZJGSQ/mcgXClTcG0Wjz6ML3dehj
 G+YiagoTU145112Sw0MS0UM2v9sqUKta/QuTZs7OJVkMS7EYxQzQU0
 Wyrc/JbL4oWg2m7Q4RMC2rRLAxkPm1ovVgihx52vrqUiy6I1A1GwCZ
 AiAkK0bK5v2lRmrEmRJJSUcMlMM1WWbUDI2jeAvWfaJKpEnikMUkqa
 KrFCfgc8AuLcH6XEqbvLVC4Wuawpfu3a6h9B7qaWA8UXwzAxpiuWdC
 o/KmzK2NqjMLBlMZkw1T53hSukG4VBcTvVUemi7CVxTYGDQlqAdpue
 lCiM2PjLNIrXdeTZOdeCTDDMhfTiUTBwjkq6L1gn/qVj2PC+d1bcAP
 OoYDbaxHLq21HnnCjDWP0kCRlmasl7o2K8hKUnmQ9fmMana4JyVL+J
 IfuCJVzIKhPqpYL6bi4mW/TpUQ81mMz+SxmZVeVrQWsZe3llKZpcTy
 VNqH8SoVCZAyiroRBgknMiB7orKQ7SI5CD2vaO3ogpxzDac/kEDOWi
 ghqDK9+s+itUrav29qn8tLTlj0c4a3XGgQYpRgjSt/eUjPMqbldETF
 VZ9dgeXPaPy6TiW3Iqsbl63uXD3gp3KHYaiqVknPmcatg+SERGOr84
 5OcGXrdlRHIS+UlLGAKfKmk3LbGSOMfFLC2PgRq/i6tNsSt3g3DDsn
 C2S8D9WSYtR/4UM5H8WNj4mk6awC+75+lSqw7+lXqQL7U/0qVWA/0K
 9S3vGASS2BR0sKS4rC27GAozKFCmLcZ3GUBsY/UcsLvI2jnqcJPmB/
 lDcyJgcW3bQqN7zNmRrjAFVl3GJP575U6Tjq8uRmDvzMfazfataMVJ
 vo2T+t5D4x58ueV+3j6cxrAHlg5MrP84aN6XRzW5HKHk383tKpjct+
 fmAU7OnkPjfj5M1yoQWTeVeEti/ukumiTRZRsHI3QO8WKjHeABpYPt
 TLlFQ9ic3RjHXNbI215HVJwP2RxqvVBCMV6+OK9UihLhlk3CkrTenC
 xlC9bB8+jmukjBsX4CMPaPBTjsN6i4Yadp0o54zQXWTnArOUvo9CqB
 gTbkzjM8MpGWTcNMcTPks83jIAQl7DnzfVFmtJZrpMZ0SMtyVtuFEc
 L7Pv5VUZXJDdAXZMCtdNjuoF3uM12FROdz+R7jUczvXK7+AZEF3Hgp
 iDkqEdjts6aOt6T8ftgiz778QDuNpmyc2kOxeuzK8A80YhY3P+fi7e
 77CIqJL5uEitaCqMP8jFk05iCcTnHCqOzIl7hNwUdtC5PD7/P8sMFP
 0VUAAAAQrFAzw/eG1sIHZlcnNpb249IjEuMCIgZW5jb2Rpbmc9InV0
 Zi0xNiI/Pg0KPEVtYWlsU2V0Pg0KICA8VmVyc2lvbj4xNS4wLjAuMD
 wvVmVyc2lvbj4NCiAgPEVtYWlscz4NCiAgICA8RW1haWwgU3RhcnRJ
 bmRleD0iMjEiPg0KICAgICAgPEVtYWlsU3RyaW5nPmJyaWplc2guc2
 luZ2hAYW1kLmNvbTwvRW1haWxTdHJpbmc+DQogICAgPC9FbWFpbD4N
 CiAgICA8RW1haWwgU3RhcnRJbmRleD0iMTAxMyIgUG9zaXRpb249Ik
 90aGVyIj4NCiAgICAgIDxFbWFpbFN0cmluZz5taWNoYWVsLnJvdGhA
 YW1kLmNvbTwvRW1haWxTdHJpbmc+DQogICAgPC9FbWFpbD4NCiAgIC
 A8RW1haWwgU3RhcnRJbmRleD0iMTExNyIgUG9zaXRpb249Ik90aGVy
 Ij4NCiAgICAgIDxFbWFpbFN0cmluZz5hc2hpc2gua2FscmFAYW1kLm
 NvbTwvRW1haWxTdHJpbmc+DQogICAgPC9FbWFpbD4NCiAgPC9FbWFp
 bHM+DQo8L0VtYWlsU2V0PgEOzwFSZXRyaWV2ZXJPcGVyYXRvciwxMC
 wwO1JldHJpZXZlck9wZXJhdG9yLDExLDE7UG9zdERvY1BhcnNlck9w
 ZXJhdG9yLDEwLDA7UG9zdERvY1BhcnNlck9wZXJhdG9yLDExLDA7UG
 9zdFdvcmRCcmVha2VyRGlhZ25vc3RpY09wZXJhdG9yLDEwLDU7UG9z
 dFdvcmRCcmVha2VyRGlhZ25vc3RpY09wZXJhdG9yLDExLDA7VHJhbn
 Nwb3J0V3JpdGVyUHJvZHVjZXIsMjAsMTU=
X-MS-Exchange-Forest-IndexAgent: 1 7193
X-MS-Exchange-Forest-EmailMessageHash: 4C0A0896
X-MS-Exchange-Forest-Language: en
X-MS-Exchange-Organization-Processed-By-Journaling: Journal Agent
X-MS-Exchange-Organization-Transport-Properties: DeliveryPriority=Low
X-MS-Exchange-Organization-Prioritization: 2:RC:REDACTED-af51df60fd698f80b064826f9ee192ca@secunet.com:86/10|SR
X-MS-Exchange-Organization-IncludeInSla: False:RecipientCountThresholdExceeded

From: Brijesh Singh <brijesh.singh@amd.com>

A key aspect of a launching an SNP guest is initializing it with a
known/measured payload which is then encrypted into guest memory as
pre-validated private pages and then measured into the cryptographic
launch context created with KVM_SEV_SNP_LAUNCH_START so that the guest
can attest itself after booting.

Since all private pages are provided by guest_memfd, make use of the
kvm_gmem_populate() interface to handle this. The general flow is that
guest_memfd will handle allocating the pages associated with the GPA
ranges being initialized by each particular call of
KVM_SEV_SNP_LAUNCH_UPDATE, copying data from userspace into those pages,
and then the post_populate callback will do the work of setting the
RMP entries for these pages to private and issuing the SNP firmware
calls to encrypt/measure them.

For more information see the SEV-SNP specification.

Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
Co-developed-by: Michael Roth <michael.roth@amd.com>
Signed-off-by: Michael Roth <michael.roth@amd.com>
Signed-off-by: Ashish Kalra <ashish.kalra@amd.com>
---
 .../virt/kvm/x86/amd-memory-encryption.rst    |  39 ++++
 arch/x86/include/uapi/asm/kvm.h               |  15 ++
 arch/x86/kvm/svm/sev.c                        | 211 ++++++++++++++++++
 3 files changed, 265 insertions(+)

diff --git a/Documentation/virt/kvm/x86/amd-memory-encryption.rst b/Documentation/virt/kvm/x86/amd-memory-encryption.rst
index a10b817c162d..4268aa5c380e 100644
--- a/Documentation/virt/kvm/x86/amd-memory-encryption.rst
+++ b/Documentation/virt/kvm/x86/amd-memory-encryption.rst
@@ -478,6 +478,45 @@ Returns: 0 on success, -negative on error
 
 See the SEV-SNP spec [snp-fw-abi]_ for further detail on the launch input.
 
+19. KVM_SEV_SNP_LAUNCH_UPDATE
+-----------------------------
+
+The KVM_SEV_SNP_LAUNCH_UPDATE command is used for loading userspace-provided
+data into a guest GPA range, measuring the contents into the SNP guest context
+created by KVM_SEV_SNP_LAUNCH_START, and then encrypting/validating that GPA
+range so that it will be immediately readable using the encryption key
+associated with the guest context once it is booted, after which point it can
+attest the measurement associated with its context before unlocking any
+secrets.
+
+It is required that the GPA ranges initialized by this command have had the
+KVM_MEMORY_ATTRIBUTE_PRIVATE attribute set in advance. See the documentation
+for KVM_SET_MEMORY_ATTRIBUTES for more details on this aspect.
+
+Parameters (in): struct  kvm_sev_snp_launch_update
+
+Returns: 0 on success, -negative on error
+
+::
+
+        struct kvm_sev_snp_launch_update {
+                __u64 gfn_start;        /* Guest page number to load/encrypt data into. */
+                __u64 uaddr;            /* Userspace address of data to be loaded/encrypted. */
+                __u32 len;              /* 4k-aligned length in bytes to copy into guest memory.*/
+                __u8 type;              /* The type of the guest pages being initialized. */
+        };
+
+where the allowed values for page_type are #define'd as::
+
+	KVM_SEV_SNP_PAGE_TYPE_NORMAL
+	KVM_SEV_SNP_PAGE_TYPE_ZERO
+	KVM_SEV_SNP_PAGE_TYPE_UNMEASURED
+	KVM_SEV_SNP_PAGE_TYPE_SECRETS
+	KVM_SEV_SNP_PAGE_TYPE_CPUID
+
+See the SEV-SNP spec [snp-fw-abi]_ for further details on how each page type is
+used/measured.
+
 Device attribute API
 ====================
 
diff --git a/arch/x86/include/uapi/asm/kvm.h b/arch/x86/include/uapi/asm/kvm.h
index 350ddd5264ea..956eb548c08e 100644
--- a/arch/x86/include/uapi/asm/kvm.h
+++ b/arch/x86/include/uapi/asm/kvm.h
@@ -695,6 +695,7 @@ enum sev_cmd_id {
 
 	/* SNP-specific commands */
 	KVM_SEV_SNP_LAUNCH_START,
+	KVM_SEV_SNP_LAUNCH_UPDATE,
 
 	KVM_SEV_NR_MAX,
 };
@@ -826,6 +827,20 @@ struct kvm_sev_snp_launch_start {
 	__u8 gosvw[16];
 };
 
+/* Kept in sync with firmware values for simplicity. */
+#define KVM_SEV_SNP_PAGE_TYPE_NORMAL		0x1
+#define KVM_SEV_SNP_PAGE_TYPE_ZERO		0x3
+#define KVM_SEV_SNP_PAGE_TYPE_UNMEASURED	0x4
+#define KVM_SEV_SNP_PAGE_TYPE_SECRETS		0x5
+#define KVM_SEV_SNP_PAGE_TYPE_CPUID		0x6
+
+struct kvm_sev_snp_launch_update {
+	__u64 gfn_start;
+	__u64 uaddr;
+	__u32 len;
+	__u8 type;
+};
+
 #define KVM_X2APIC_API_USE_32BIT_IDS            (1ULL << 0)
 #define KVM_X2APIC_API_DISABLE_BROADCAST_QUIRK  (1ULL << 1)
 
diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
index 6c7c77e33e62..a8a8a285b4a4 100644
--- a/arch/x86/kvm/svm/sev.c
+++ b/arch/x86/kvm/svm/sev.c
@@ -247,6 +247,35 @@ static void sev_decommission(unsigned int handle)
 	sev_guest_decommission(&decommission, NULL);
 }
 
+static int snp_page_reclaim(u64 pfn)
+{
+	struct sev_data_snp_page_reclaim data = {0};
+	int err, rc;
+
+	data.paddr = __sme_set(pfn << PAGE_SHIFT);
+	rc = sev_do_cmd(SEV_CMD_SNP_PAGE_RECLAIM, &data, &err);
+	if (WARN_ON_ONCE(rc)) {
+		/*
+		 * This shouldn't happen under normal circumstances, but if the
+		 * reclaim failed, then the page is no longer safe to use.
+		 */
+		snp_leak_pages(pfn, 1);
+	}
+
+	return rc;
+}
+
+static int host_rmp_make_shared(u64 pfn, enum pg_level level, bool leak)
+{
+	int rc;
+
+	rc = rmp_make_shared(pfn, level);
+	if (rc && leak)
+		snp_leak_pages(pfn, page_level_size(level) >> PAGE_SHIFT);
+
+	return rc;
+}
+
 static void sev_unbind_asid(struct kvm *kvm, unsigned int handle)
 {
 	struct sev_data_deactivate deactivate;
@@ -2075,6 +2104,185 @@ static int snp_launch_start(struct kvm *kvm, struct kvm_sev_cmd *argp)
 	return rc;
 }
 
+struct sev_gmem_populate_args {
+	__u8 type;
+	int sev_fd;
+	int fw_error;
+};
+
+static int sev_gmem_post_populate(struct kvm *kvm, struct kvm_memory_slot *slot,
+				  gfn_t gfn_start, kvm_pfn_t pfn, void __user *src,
+				  int order, void *opaque)
+{
+	struct sev_gmem_populate_args *sev_populate_args = opaque;
+	struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info;
+	int npages = (1 << order);
+	int n_private = 0;
+	int ret, i;
+	gfn_t gfn;
+
+	pr_debug("%s: gfn_start %llx pfn_start %llx npages %d\n",
+		 __func__, gfn_start, pfn, npages);
+
+	for (gfn = gfn_start, i = 0; gfn < gfn_start + npages; gfn++, i++) {
+		struct sev_data_snp_launch_update fw_args = {0};
+		bool assigned;
+		int level;
+
+		if (!kvm_mem_is_private(kvm, gfn)) {
+			pr_debug("%s: Failed to ensure GFN 0x%llx has private memory attribute set\n",
+				 __func__, gfn);
+			ret = -EINVAL;
+			break;
+		}
+
+		ret = snp_lookup_rmpentry((u64)pfn + i, &assigned, &level);
+		if (ret || assigned) {
+			pr_debug("%s: Failed to ensure GFN 0x%llx RMP entry is in the initial shared state, ret: %d assigned: %d\n",
+				 __func__, gfn, ret, assigned);
+			break;
+		}
+
+		ret = rmp_make_private(pfn + i, gfn << PAGE_SHIFT, PG_LEVEL_4K,
+				       sev_get_asid(kvm), true);
+		if (ret) {
+			pr_debug("%s: Failed to convert GFN 0x%llx to private, ret: %d\n",
+				 __func__, gfn, ret);
+			break;
+		}
+
+		n_private++;
+
+		fw_args.gctx_paddr = __psp_pa(sev->snp_context);
+		fw_args.address = __sme_set(pfn_to_hpa(pfn + i));
+		fw_args.page_size = PG_LEVEL_TO_RMP(PG_LEVEL_4K);
+		fw_args.page_type = sev_populate_args->type;
+		ret = __sev_issue_cmd(sev_populate_args->sev_fd, SEV_CMD_SNP_LAUNCH_UPDATE,
+				      &fw_args, &sev_populate_args->fw_error);
+		if (ret) {
+			pr_debug("%s: SEV-SNP launch update failed, ret: 0x%x, fw_error: 0x%x\n",
+				 __func__, ret, sev_populate_args->fw_error);
+
+			if (snp_page_reclaim(pfn + i))
+				break;
+
+			/*
+			 * When invalid CPUID function entries are detected,
+			 * firmware writes the expected values into the page and
+			 * leaves it unencrypted so it can be used for debugging
+			 * and error-reporting.
+			 *
+			 * Copy the page back into the source buffer so that
+			 * userspace can report which CPUID leaves/fields
+			 * failed CPUID validation.
+			 */
+			if (sev_populate_args->type == KVM_SEV_SNP_PAGE_TYPE_CPUID &&
+			    sev_populate_args->fw_error == SEV_RET_INVALID_PARAM) {
+				void *vaddr;
+
+				host_rmp_make_shared(pfn + i, PG_LEVEL_4K, true);
+				vaddr = kmap_local_pfn(pfn + i);
+
+				if (copy_to_user(src + i * PAGE_SIZE,
+						 vaddr, PAGE_SIZE))
+					pr_debug("Failed to write CPUID page back to userspace\n");
+
+				kunmap_local(vaddr);
+			}
+
+			break;
+		}
+	}
+
+	if (ret) {
+		pr_debug("%s: exiting with error ret %d, undoing %d populated gmem pages.\n",
+			 __func__, ret, n_private);
+		for (i = 0; i < n_private; i++)
+			host_rmp_make_shared(pfn + i, PG_LEVEL_4K, true);
+	}
+
+	return ret;
+}
+
+static int snp_launch_update(struct kvm *kvm, struct kvm_sev_cmd *argp)
+{
+	struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info;
+	struct sev_gmem_populate_args sev_populate_args = {0};
+	struct kvm_gmem_populate_args populate_args = {0};
+	struct kvm_sev_snp_launch_update params;
+	struct kvm_memory_slot *memslot;
+	unsigned int npages;
+	int ret = 0;
+
+	if (!sev_snp_guest(kvm) || !sev->snp_context)
+		return -EINVAL;
+
+	if (copy_from_user(&params, u64_to_user_ptr(argp->data), sizeof(params)))
+		return -EFAULT;
+
+	if (!IS_ALIGNED(params.len, PAGE_SIZE) ||
+	    (params.type != KVM_SEV_SNP_PAGE_TYPE_NORMAL &&
+	     params.type != KVM_SEV_SNP_PAGE_TYPE_ZERO &&
+	     params.type != KVM_SEV_SNP_PAGE_TYPE_UNMEASURED &&
+	     params.type != KVM_SEV_SNP_PAGE_TYPE_SECRETS &&
+	     params.type != KVM_SEV_SNP_PAGE_TYPE_CPUID))
+		return -EINVAL;
+
+	npages = params.len / PAGE_SIZE;
+
+	pr_debug("%s: GFN range 0x%llx-0x%llx type %d\n", __func__,
+		 params.gfn_start, params.gfn_start + npages, params.type);
+
+	/*
+	 * For each GFN that's being prepared as part of the initial guest
+	 * state, the following pre-conditions are verified:
+	 *
+	 *   1) The backing memslot is a valid private memslot.
+	 *   2) The GFN has been set to private via KVM_SET_MEMORY_ATTRIBUTES
+	 *      beforehand.
+	 *   3) The PFN of the guest_memfd has not already been set to private
+	 *      in the RMP table.
+	 *
+	 * The KVM MMU relies on kvm->mmu_invalidate_seq to retry nested page
+	 * faults if there's a race between a fault and an attribute update via
+	 * KVM_SET_MEMORY_ATTRIBUTES, and a similar approach could be used
+	 * here. However, kvm->slots_lock guards against both this race and
+	 * concurrent memslot updates while these checks are being performed,
+	 * so take it here to make it easier to reason about the expected
+	 * initial state and to better guard against unexpected situations.
+	 */
+	mutex_lock(&kvm->slots_lock);
+
+	memslot = gfn_to_memslot(kvm, params.gfn_start);
+	if (!kvm_slot_can_be_private(memslot)) {
+		ret = -EINVAL;
+		goto out;
+	}
+
+	sev_populate_args.sev_fd = argp->sev_fd;
+	sev_populate_args.type = params.type;
+
+	populate_args.opaque = &sev_populate_args;
+	populate_args.gfn = params.gfn_start;
+	populate_args.src = u64_to_user_ptr(params.uaddr);
+	populate_args.npages = npages;
+	populate_args.do_memcpy = params.type != KVM_SEV_SNP_PAGE_TYPE_ZERO;
+	populate_args.post_populate = sev_gmem_post_populate;
+
+	ret = kvm_gmem_populate(kvm, memslot, &populate_args);
+	if (ret) {
+		argp->error = sev_populate_args.fw_error;
+		pr_debug("%s: kvm_gmem_populate failed, ret %d\n", __func__, ret);
+	}
+
+out:
+	mutex_unlock(&kvm->slots_lock);
+
+	return ret;
+}
+
 int sev_mem_enc_ioctl(struct kvm *kvm, void __user *argp)
 {
 	struct kvm_sev_cmd sev_cmd;
@@ -2165,6 +2373,9 @@ int sev_mem_enc_ioctl(struct kvm *kvm, void __user *argp)
 	case KVM_SEV_SNP_LAUNCH_START:
 		r = snp_launch_start(kvm, &sev_cmd);
 		break;
+	case KVM_SEV_SNP_LAUNCH_UPDATE:
+		r = snp_launch_update(kvm, &sev_cmd);
+		break;
 	default:
 		r = -EINVAL;
 		goto out;
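
As a hedged follow-up sketch (not part of this patch): the CPUID handling in
sev_gmem_post_populate() above is visible to userspace, since on a rejected
KVM_SEV_SNP_PAGE_TYPE_CPUID page the firmware error (SEV_RET_INVALID_PARAM,
from <linux/psp-sev.h>) is returned via kvm_sev_cmd.error and the corrected
CPUID values are copied back into the source buffer. A VMM might consume that
roughly as follows; as in the earlier sketches, the range is assumed to have
been marked private beforehand and names other than the uAPI ones are
placeholders::

    #include <linux/kvm.h>
    #include <linux/psp-sev.h>
    #include <stdio.h>
    #include <sys/ioctl.h>

    /* Hedged sketch: load the CPUID page and report a firmware fixup. */
    static int snp_load_cpuid_page(int vm_fd, int sev_fd, __u64 gfn,
                                   void *cpuid_buf)
    {
            struct kvm_sev_snp_launch_update update = {
                    .gfn_start = gfn,
                    .uaddr = (__u64)(unsigned long)cpuid_buf,
                    .len = 4096,
                    .type = KVM_SEV_SNP_PAGE_TYPE_CPUID,
            };
            struct kvm_sev_cmd cmd = {
                    .id = KVM_SEV_SNP_LAUNCH_UPDATE,
                    .data = (__u64)(unsigned long)&update,
                    .sev_fd = sev_fd,
            };
            int ret = ioctl(vm_fd, KVM_MEMORY_ENCRYPT_OP, &cmd);

            if (ret && cmd.error == SEV_RET_INVALID_PARAM) {
                    /*
                     * cpuid_buf should now hold the firmware-corrected CPUID
                     * table (see sev_gmem_post_populate() above), so it can
                     * be diffed against what was originally submitted.
                     */
                    fprintf(stderr, "CPUID page rejected; corrected table written back by firmware\n");
            }

            return ret;
    }
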
-- 
2.25.1


Thread overview: 96+ messages
2024-03-29 22:58 [PATCH v12 00/29] Add AMD Secure Nested Paging (SEV-SNP) Hypervisor Support Michael Roth
2024-03-29 22:58 ` Michael Roth
2024-03-29 22:58 ` [PATCH v12 01/29] [TEMP] x86/kvm/Kconfig: Have KVM_AMD_SEV select ARCH_HAS_CC_PLATFORM Michael Roth
2024-03-29 22:58 ` [PATCH v12 02/29] [TEMP] x86/cc: Add cc_platform_set/_clear() helpers Michael Roth
2024-03-29 22:58   ` Michael Roth
2024-03-29 22:58   ` Michael Roth
2024-03-29 22:58 ` [PATCH v12 03/29] [TEMP] x86/CPU/AMD: Track SNP host status with cc_platform_*() Michael Roth
2024-03-29 22:58   ` Michael Roth
2024-03-29 22:58   ` Michael Roth
2024-03-29 22:58 ` [PATCH v12 04/29] [TEMP] fixup! KVM: SEV: sync FPU and AVX state at LAUNCH_UPDATE_VMSA time Michael Roth
2024-03-29 22:58 ` [PATCH v12 05/29] KVM: x86: Define RMP page fault error bits for #NPF Michael Roth
2024-03-29 22:58   ` Michael Roth
2024-03-30 19:28   ` Paolo Bonzini
2024-03-29 22:58 ` [PATCH v12 06/29] KVM: SEV: Select KVM_GENERIC_PRIVATE_MEM when CONFIG_KVM_AMD_SEV=y Michael Roth
2024-03-29 22:58 ` [PATCH v12 07/29] KVM: SEV: Add support to handle AP reset MSR protocol Michael Roth
2024-03-29 22:58   ` Michael Roth
2024-03-29 22:58   ` Michael Roth
2024-03-29 22:58 ` [PATCH v12 08/29] KVM: SEV: Add GHCB handling for Hypervisor Feature Support requests Michael Roth
2024-03-29 22:58   ` Michael Roth
2024-03-29 22:58 ` [PATCH v12 09/29] KVM: SEV: Add initial SEV-SNP support Michael Roth
2024-03-29 22:58   ` Michael Roth
2024-03-30 19:58   ` Paolo Bonzini
2024-03-29 22:58 ` [PATCH v12 10/29] KVM: SEV: Add KVM_SEV_SNP_LAUNCH_START command Michael Roth
2024-03-29 22:58   ` Michael Roth
2024-03-30 20:20   ` Paolo Bonzini
2024-03-29 22:58 ` Michael Roth [this message]
2024-03-29 22:58   ` [PATCH v12 11/29] KVM: SEV: Add KVM_SEV_SNP_LAUNCH_UPDATE command Michael Roth
2024-03-30 20:31   ` Paolo Bonzini
2024-04-01 22:22     ` Michael Roth
2024-04-02 22:58       ` Isaku Yamahata
2024-04-03 12:51         ` Paolo Bonzini
2024-04-03 15:37           ` Isaku Yamahata
2024-04-04 16:03   ` Paolo Bonzini
2024-03-29 22:58 ` [PATCH v12 12/29] KVM: SEV: Add KVM_SEV_SNP_LAUNCH_FINISH command Michael Roth
2024-03-29 22:58   ` Michael Roth
2024-03-30 20:41   ` Paolo Bonzini
2024-04-01 23:17     ` Michael Roth
2024-04-03 12:56       ` Paolo Bonzini
2024-03-29 22:58 ` [PATCH v12 13/29] KVM: SEV: Add support to handle GHCB GPA register VMGEXIT Michael Roth
2024-03-29 22:58   ` Michael Roth
2024-03-29 22:58 ` [PATCH v12 14/29] KVM: SEV: Add support to handle MSR based Page State Change VMGEXIT Michael Roth
2024-03-29 22:58 ` [PATCH v12 15/29] KVM: SEV: Add support to handle " Michael Roth
2024-03-29 22:58 ` [PATCH v12 16/29] KVM: x86: Export the kvm_zap_gfn_range() for the SNP use Michael Roth
2024-03-30 20:51   ` Paolo Bonzini
2024-03-29 22:58 ` [PATCH v12 17/29] KVM: SEV: Add support to handle RMP nested page faults Michael Roth
2024-03-29 22:58   ` Michael Roth
2024-03-29 22:58   ` Michael Roth
2024-03-30 20:55   ` Paolo Bonzini
2024-03-29 22:58 ` [PATCH v12 18/29] KVM: SEV: Use a VMSA physical address variable for populating VMCB Michael Roth
2024-03-29 22:58   ` Michael Roth
2024-03-30 21:01   ` Paolo Bonzini
2024-04-16 11:53     ` Paolo Bonzini
2024-04-16 14:25       ` Tom Lendacky
2024-04-16 17:00         ` Paolo Bonzini
2024-04-17 20:57       ` Michael Roth
2024-03-29 22:58 ` [PATCH v12 19/29] KVM: SEV: Support SEV-SNP AP Creation NAE event Michael Roth
2024-03-29 22:58   ` Michael Roth
2024-03-29 22:58 ` [PATCH v12 20/29] KVM: SEV: Add support for GHCB-based termination requests Michael Roth
2024-03-29 22:58   ` Michael Roth
2024-03-29 22:58 ` [PATCH v12 21/29] KVM: SEV: Implement gmem hook for initializing private pages Michael Roth
2024-03-30 21:05   ` Paolo Bonzini
2024-03-30 21:05     ` Paolo Bonzini
2024-03-30 21:05     ` Paolo Bonzini
2024-03-29 22:58 ` [PATCH v12 22/29] KVM: SEV: Implement gmem hook for invalidating " Michael Roth
2024-03-30 21:31   ` Paolo Bonzini
2024-03-30 21:31     ` Paolo Bonzini
2024-03-30 21:31     ` Paolo Bonzini
2024-04-18 19:57     ` Michael Roth
2024-03-29 22:58 ` [PATCH v12 23/29] KVM: x86: Implement gmem hook for determining max NPT mapping level Michael Roth
2024-03-29 22:58   ` Michael Roth
2024-03-29 22:58   ` Michael Roth
2024-03-30 21:35   ` Paolo Bonzini
2024-03-30 21:35     ` Paolo Bonzini
2024-03-30 21:35     ` Paolo Bonzini
2024-03-29 22:58 ` [PATCH v12 24/29] KVM: SEV: Avoid WBINVD for HVA-based MMU notifications for SNP Michael Roth
2024-03-30 21:35   ` Paolo Bonzini
2024-03-30 21:35     ` Paolo Bonzini
2024-03-29 22:58 ` [PATCH v12 25/29] KVM: SVM: Add module parameter to enable the SEV-SNP Michael Roth
2024-03-30 21:35   ` Paolo Bonzini
2024-03-30 21:35     ` Paolo Bonzini
2024-03-29 22:58 ` [PATCH v12 26/29] KVM: SEV: Provide support for SNP_GUEST_REQUEST NAE event Michael Roth
2024-03-29 22:58   ` Michael Roth
2024-03-29 22:58   ` Michael Roth
2024-04-10 22:14   ` Tom Lendacky
2024-03-29 22:58 ` [PATCH v12 27/29] crypto: ccp: Add the SNP_VLEK_LOAD command Michael Roth
2024-03-29 22:58   ` Michael Roth
2024-04-10 22:20   ` Tom Lendacky
2024-03-29 22:58 ` [PATCH v12 28/29] crypto: ccp: Add the SNP_{PAUSE,RESUME}_ATTESTATION commands Michael Roth
2024-03-29 22:58   ` Michael Roth
2024-04-10 22:27   ` Tom Lendacky
2024-03-29 22:58 ` [PATCH v12 29/29] KVM: SEV: Provide support for SNP_EXTENDED_GUEST_REQUEST NAE event Michael Roth
2024-03-29 22:58   ` Michael Roth
2024-04-11 13:33   ` Tom Lendacky
2024-03-30 21:44 ` [PATCH v12 00/29] Add AMD Secure Nested Paging (SEV-SNP) Hypervisor Support Paolo Bonzini
2024-03-30 21:44   ` Paolo Bonzini
2024-03-30 21:44   ` Paolo Bonzini
