Date: Mon, 26 Sep 2022 15:10:27 +0300
From: "Kirill A. Shutemov"
To: "Kirill A. Shutemov"
Cc: Tom Lendacky, Dionna Amalie Glaze, Dave Hansen, Mel Gorman,
	Vlastimil Babka, Borislav Petkov, Andy Lutomirski,
	Sean Christopherson, Andrew Morton, Joerg Roedel, Ard Biesheuvel,
	Andi Kleen, Kuppuswamy Sathyanarayanan, David Rientjes,
	Thomas Gleixner, Peter Zijlstra, Paolo Bonzini, Ingo Molnar,
	Dario Faggioli, Mike Rapoport, David Hildenbrand, Marcelo Cerri,
	tim.gardner@canonical.com, Khalid ElMously, philip.cox@canonical.com,
	the arch/x86 maintainers, Linux Memory Management List,
	linux-coco@lists.linux.dev, linux-efi, LKML
Subject: Re: [PATCHv7 02/14] mm: Add support for unaccepted memory
Message-ID: <20220926121027.xc2cgzuiafcssmea@box.shutemov.name>
References: <2981e25e-9cda-518a-9750-b8694f2356b5@amd.com>
	<984e07ed-914f-93ca-a141-3fc8677878e0@intel.com>
	<20220924010302.bwas4zbro37rrxai@box.shutemov.name>
In-Reply-To: <20220924010302.bwas4zbro37rrxai@box.shutemov.name>

On Sat, Sep 24, 2022 at 04:03:02AM +0300, Kirill A. Shutemov wrote:
> On Thu, Sep 22, 2022 at 09:31:12AM -0500, Tom Lendacky wrote:
> > On 9/8/22 14:28, Mike Rapoport wrote:
> > > On Thu, Sep 08, 2022 at 09:23:07AM -0700, Dionna Amalie Glaze wrote:
> > > > >
> > > > > Looks like the first access to the memory map fails, although I
> > > > > think it's not in INIT_LIST_HEAD() but rather in init_page_count().
> > > > >
> > > > > I'd start with making sure that page_alloc::memmap_alloc() actually
> > > > > returns accepted memory. If you build the kernel with
> > > > > CONFIG_DEBUG_VM=y, the memory map will be poisoned in this
> > > > > function, so my guess is it'd crash there.
> > > >
> > > > That's a wonderful hint, thank you! I did not run this test with
> > > > CONFIG_DEBUG_VM set, but do you think it's possible it could still
> > > > be here?
> > >
> > > It depends on how you configured your kernel. Say, defconfig does
> > > not set it.
> >
> > I also hit the issue at 256GB. My config is using
> > CONFIG_SPARSEMEM_VMEMMAP and fails in memmap_init_range() when
> > attempting to add the first PFN. It looks like the underlying page
> > that is backing the vmemmap has not been accepted (I receive a
> > #VC 0x404 => page not validated).
> >
> > Kirill, is this a path that you've looked at? It would appear that
> > somewhere in the vmemmap_populate_hugepages() path, some memory
> > acceptance needs to be done for the pages that are used to back the
> > vmemmap. I'm not very familiar with this code, so I'm not sure why
> > everything works for a guest with 255GB of memory, but then fails for
> > a guest with 256GB of memory.
>
> Hm. I don't have a machine that large at hand at the moment. And I have
> not looked at the codepath before.
>
> I will try to look into the issue.

I'm not able to trigger the bug. With the help of vm.overcommit_memory=1,
I managed to boot a TDX guest to a shell with 256G and 1T of guest memory
just fine.

Any chance it is SEV-SNP specific? Or maybe there is some difference in
kernel config? Could you share yours?
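If the pages backing the vmemmap really are unaccepted, the problem may
sit below vmemmap_populate_hugepages() itself: those early allocations
come from memblock, and nothing accepts a range before memblock hands it
out. A completely untested sketch of one way to cover that, assuming
accept_memory() from this patch is callable this early and is a no-op on
already-accepted ranges:

diff --git a/mm/memblock.c b/mm/memblock.c
--- a/mm/memblock.c
+++ b/mm/memblock.c
@@ static phys_addr_t __init memblock_alloc_range_nid(phys_addr_t size,
+	/*
+	 * Accept the allocated range before returning it, so that early
+	 * allocations -- including the pages that back the vmemmap via
+	 * memmap_alloc()/vmemmap_alloc_block_buf() -- are accepted
+	 * before anything touches them.
+	 */
+	accept_memory(found, found + size);
+
 	return found;
 }

This would make every memblock allocation pay the acceptance cost up
front, but memblock-time allocations are tiny compared to total guest
memory, so it may be an acceptable price for covering memmap_alloc(),
vmemmap_alloc_block_buf(), and any other early user in one place.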
Shutemov" Cc: Tom Lendacky , Dionna Amalie Glaze , Dave Hansen , Mel Gorman , Vlastimil Babka , Borislav Petkov , Andy Lutomirski , Sean Christopherson , Andrew Morton , Joerg Roedel , Ard Biesheuvel , Andi Kleen , Kuppuswamy Sathyanarayanan , David Rientjes , Thomas Gleixner , Peter Zijlstra , Paolo Bonzini , Ingo Molnar , Dario Faggioli , Mike Rapoport , David Hildenbrand , Marcelo Cerri , tim.gardner@canonical.com, Khalid ElMously , philip.cox@canonical.com, the arch/x86 maintainers , Linux Memory Management List , linux-coco@lists.linux.dev, linux-efi , LKML , Mike Rapoport Subject: Re: [PATCHv7 02/14] mm: Add support for unaccepted memory Message-ID: <20220926121027.xc2cgzuiafcssmea@box.shutemov.name> References: <2981e25e-9cda-518a-9750-b8694f2356b5@amd.com> <984e07ed-914f-93ca-a141-3fc8677878e0@intel.com> <20220924010302.bwas4zbro37rrxai@box.shutemov.name> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20220924010302.bwas4zbro37rrxai@box.shutemov.name> Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org On Sat, Sep 24, 2022 at 04:03:02AM +0300, Kirill A. Shutemov wrote: > On Thu, Sep 22, 2022 at 09:31:12AM -0500, Tom Lendacky wrote: > > On 9/8/22 14:28, Mike Rapoport wrote: > > > On Thu, Sep 08, 2022 at 09:23:07AM -0700, Dionna Amalie Glaze wrote: > > > > > > > > > > Looks like the first access to the memory map fails, although I think > > > > > it's not in INIT_LIST_HEAD() but rather in init_page_count(). > > > > > > > > > > I'd start with making sure that page_alloc::memmap_alloc() actually returns > > > > > accepted memory. If you build kernel with CONFIG_DEBUG_VM=y the memory map > > > > > will poisoned in this function, so my guess is it'd crash there. > > > > > > > > > > > > > That's a wonderful hint, thank you! I did not run this test > > > > CONFIG_DEBUG_VM set, but you think it's possible it could still be > > > > here? > > > > > > It depends on how you configured your kernel. Say, defconfig does not set > > > it. > > > > > > > I also hit the issue at 256GB. My config is using CONFIG_SPARSEMEM_VMEMMAP > > and fails in memmap_init_range() when attempting to add the first PFN. It > > looks like the underlying page that is backing the vmemmap has not been > > accepted (I receive a #VC 0x404 => page not validated). > > > > Kirill, is this a path that you've looked at? It would appear that somewhere > > in the vmemmap_populate_hugepages() path, some memory acceptance needs to be > > done for the pages that are used to back vmemmap. I'm not very familiar with > > this code, so I'm not sure why everything works for a guest with 255GB of > > memory, but then fails for a guest with 256GB of memory. > > Hm. I don't have machine that large at hands at the moment. And I have not > looked at the codepath before. > > I will try to look into the issue. I'm not able to trigger the bug. With help of vm.overcommit_memory=1, I was managed boot TDX guest to shell with 256G and 1T of guest memory just fine. Any chance it is SEV-SNP specific? Or maybe there some difference in kernel config? Could you share yours? -- Kiryl Shutsemau / Kirill A. Shutemov