Date: Thu, 25 Apr 2024 05:00:55 +0100
From: Matthew Wilcox <willy@infradead.org>
To: Kefeng Wang <wangkefeng.wang@huawei.com>
Cc: Andrew Morton, Ryan Roberts, linux-mm@kvack.org, David Hildenbrand
Subject: Re: [PATCH v2] mm: add more readable thp_vma_allowable_order_foo()
References: <20240425035108.3063-1-wangkefeng.wang@huawei.com>
In-Reply-To: <20240425035108.3063-1-wangkefeng.wang@huawei.com>

On Thu, Apr 25, 2024 at 11:51:08AM +0800, Kefeng Wang wrote:
> There are too many bool arguments in thp_vma_allowable_orders(), adding
> some more readable thp_vma_allowable_order_foo(),

Here's an alternative approach I came up with and forgot to send out.
I take no position on which is better.

commit a761d4b9cf14
Author: Matthew Wilcox (Oracle) <willy@infradead.org>
Date:   Tue Apr 16 00:25:09 2024 -0400

    mm: Simplify thp_vma_allowable_order

    Combine the three boolean arguments into one flags argument for
    readability.
    Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>

diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
index 23fbab954c20..0ffa8902f973 100644
--- a/fs/proc/task_mmu.c
+++ b/fs/proc/task_mmu.c
@@ -866,8 +866,8 @@ static int show_smap(struct seq_file *m, void *v)
 	__show_smap(m, &mss, false);
 
 	seq_printf(m, "THPeligible:    %8u\n",
-		   !!thp_vma_allowable_orders(vma, vma->vm_flags, true, false,
-					      true, THP_ORDERS_ALL));
+		   !!thp_vma_allowable_orders(vma, vma->vm_flags,
+			   TVA_SMAPS | TVA_ENFORCE_SYSFS, THP_ORDERS_ALL));
 
 	if (arch_pkeys_enabled())
 		seq_printf(m, "ProtectionKey:  %8u\n", vma_pkey(vma));
diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index de0c89105076..0d0ba39b86ae 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -84,8 +84,12 @@ extern struct kobj_attribute shmem_enabled_attr;
  */
 #define THP_ORDERS_ALL	(THP_ORDERS_ALL_ANON | THP_ORDERS_ALL_FILE)
 
-#define thp_vma_allowable_order(vma, vm_flags, smaps, in_pf, enforce_sysfs, order) \
-	(!!thp_vma_allowable_orders(vma, vm_flags, smaps, in_pf, enforce_sysfs, BIT(order)))
+#define TVA_SMAPS		(1 << 0)	/* Will be used for procfs */
+#define TVA_IN_PF		(1 << 1)	/* Page fault handler */
+#define TVA_ENFORCE_SYSFS	(1 << 2)	/* Obey sysfs configuration */
+
+#define thp_vma_allowable_order(vma, vm_flags, tva_flags, order) \
+	(!!thp_vma_allowable_orders(vma, vm_flags, tva_flags, BIT(order)))
 
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
 #define HPAGE_PMD_SHIFT PMD_SHIFT
@@ -210,17 +214,15 @@ static inline bool file_thp_enabled(struct vm_area_struct *vma)
 }
 
 unsigned long __thp_vma_allowable_orders(struct vm_area_struct *vma,
-					 unsigned long vm_flags, bool smaps,
-					 bool in_pf, bool enforce_sysfs,
+					 unsigned long vm_flags,
+					 unsigned long tva_flags,
 					 unsigned long orders);
 
 /**
  * thp_vma_allowable_orders - determine hugepage orders that are allowed for vma
  * @vma:  the vm area to check
  * @vm_flags: use these vm_flags instead of vma->vm_flags
- * @smaps: whether answer will be used for smaps file
- * @in_pf: whether answer will be used by page fault handler
- * @enforce_sysfs: whether sysfs config should be taken into account
+ * @tva_flags: Which TVA flags to honour
  * @orders: bitfield of all orders to consider
 *
 * Calculates the intersection of the requested hugepage orders and the allowed
@@ -233,12 +235,12 @@ unsigned long __thp_vma_allowable_orders(struct vm_area_struct *vma,
  */
 static inline
 unsigned long thp_vma_allowable_orders(struct vm_area_struct *vma,
-				       unsigned long vm_flags, bool smaps,
-				       bool in_pf, bool enforce_sysfs,
+				       unsigned long vm_flags,
+				       unsigned long tva_flags,
 				       unsigned long orders)
 {
 	/* Optimization to check if required orders are enabled early. */
-	if (enforce_sysfs && vma_is_anonymous(vma)) {
+	if ((tva_flags & TVA_ENFORCE_SYSFS) && vma_is_anonymous(vma)) {
 		unsigned long mask = READ_ONCE(huge_anon_orders_always);
 
 		if (vm_flags & VM_HUGEPAGE)
@@ -252,8 +254,7 @@ unsigned long thp_vma_allowable_orders(struct vm_area_struct *vma,
 		return 0;
 	}
 
-	return __thp_vma_allowable_orders(vma, vm_flags, smaps, in_pf,
-					  enforce_sysfs, orders);
+	return __thp_vma_allowable_orders(vma, vm_flags, tva_flags, orders);
 }
 
 #define transparent_hugepage_use_zero_page()				\
@@ -404,8 +405,8 @@ static inline unsigned long thp_vma_suitable_orders(struct vm_area_struct *vma,
 }
 
 static inline unsigned long thp_vma_allowable_orders(struct vm_area_struct *vma,
-					unsigned long vm_flags, bool smaps,
-					bool in_pf, bool enforce_sysfs,
+					unsigned long vm_flags,
+					unsigned long tva_flags,
 					unsigned long orders)
 {
 	return 0;
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 8bc4ffd4725e..5d3d9c0c4153 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -80,10 +80,13 @@ unsigned long huge_anon_orders_madvise __read_mostly;
 unsigned long huge_anon_orders_inherit __read_mostly;
 
 unsigned long __thp_vma_allowable_orders(struct vm_area_struct *vma,
-					 unsigned long vm_flags, bool smaps,
-					 bool in_pf, bool enforce_sysfs,
+					 unsigned long vm_flags,
+					 unsigned long tva_flags,
 					 unsigned long orders)
 {
+	bool smaps = tva_flags & TVA_SMAPS;
+	bool in_pf = tva_flags & TVA_IN_PF;
+	bool enforce_sysfs = tva_flags & TVA_ENFORCE_SYSFS;
 	/* Check the intersection of requested and supported orders. */
 	orders &= vma_is_anonymous(vma) ?
 			THP_ORDERS_ALL_ANON : THP_ORDERS_ALL_FILE;
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 38830174608f..9642d3c6ee7e 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -453,7 +453,7 @@ void khugepaged_enter_vma(struct vm_area_struct *vma,
 {
 	if (!test_bit(MMF_VM_HUGEPAGE, &vma->vm_mm->flags) &&
 	    hugepage_flags_enabled()) {
-		if (thp_vma_allowable_order(vma, vm_flags, false, false, true,
+		if (thp_vma_allowable_order(vma, vm_flags, TVA_ENFORCE_SYSFS,
 					    PMD_ORDER))
 			__khugepaged_enter(vma->vm_mm);
 	}
@@ -917,6 +917,7 @@ static int hugepage_vma_revalidate(struct mm_struct *mm, unsigned long address,
 				   struct collapse_control *cc)
 {
 	struct vm_area_struct *vma;
+	unsigned long tva_flags = cc->is_khugepaged ? TVA_ENFORCE_SYSFS : 0;
 
 	if (unlikely(hpage_collapse_test_exit_or_disable(mm)))
 		return SCAN_ANY_PROCESS;
@@ -927,8 +928,7 @@ static int hugepage_vma_revalidate(struct mm_struct *mm, unsigned long address,
 
 	if (!thp_vma_suitable_order(vma, address, PMD_ORDER))
 		return SCAN_ADDRESS_RANGE;
-	if (!thp_vma_allowable_order(vma, vma->vm_flags, false, false,
-				     cc->is_khugepaged, PMD_ORDER))
+	if (!thp_vma_allowable_order(vma, vma->vm_flags, tva_flags, PMD_ORDER))
 		return SCAN_VMA_CHECK;
 	/*
 	 * Anon VMA expected, the address may be unmapped then
@@ -1510,8 +1510,7 @@ int collapse_pte_mapped_thp(struct mm_struct *mm, unsigned long addr,
 	 * and map it by a PMD, regardless of sysfs THP settings. As such, let's
 	 * analogously elide sysfs THP settings here.
 	 */
-	if (!thp_vma_allowable_order(vma, vma->vm_flags, false, false, false,
-				     PMD_ORDER))
+	if (!thp_vma_allowable_order(vma, vma->vm_flags, 0, PMD_ORDER))
 		return SCAN_VMA_CHECK;
 
 	/* Keep pmd pgtable for uffd-wp; see comment in retract_page_tables() */
@@ -2376,8 +2375,8 @@ static unsigned int khugepaged_scan_mm_slot(unsigned int pages, int *result,
 			progress++;
 			break;
 		}
-		if (!thp_vma_allowable_order(vma, vma->vm_flags, false, false,
-					     true, PMD_ORDER)) {
+		if (!thp_vma_allowable_order(vma, vma->vm_flags,
+					TVA_ENFORCE_SYSFS, PMD_ORDER)) {
skip:
 			progress++;
 			continue;
@@ -2714,8 +2713,7 @@ int madvise_collapse(struct vm_area_struct *vma, struct vm_area_struct **prev,
 
 	*prev = vma;
 
-	if (!thp_vma_allowable_order(vma, vma->vm_flags, false, false, false,
-				     PMD_ORDER))
+	if (!thp_vma_allowable_order(vma, vma->vm_flags, 0, PMD_ORDER))
 		return -EINVAL;
 
 	cc = kmalloc(sizeof(*cc), GFP_KERNEL);
diff --git a/mm/memory.c b/mm/memory.c
index 5624b881b662..287f7d6eb9ed 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -4346,8 +4346,8 @@ static struct folio *alloc_anon_folio(struct vm_fault *vmf)
 	 * for this vma. Then filter out the orders that can't be allocated over
 	 * the faulting address and still be fully contained in the vma.
 	 */
-	orders = thp_vma_allowable_orders(vma, vma->vm_flags, false, true, true,
-					  BIT(PMD_ORDER) - 1);
+	orders = thp_vma_allowable_orders(vma, vma->vm_flags,
+			TVA_IN_PF | TVA_ENFORCE_SYSFS, BIT(PMD_ORDER) - 1);
 	orders = thp_vma_suitable_orders(vma, vmf->address, orders);
 
 	if (!orders)
@@ -5395,7 +5395,8 @@ static vm_fault_t __handle_mm_fault(struct vm_area_struct *vma,
 		return VM_FAULT_OOM;
retry_pud:
 	if (pud_none(*vmf.pud) &&
-	    thp_vma_allowable_order(vma, vm_flags, false, true, true, PUD_ORDER)) {
+	    thp_vma_allowable_order(vma, vm_flags,
+				TVA_IN_PF | TVA_ENFORCE_SYSFS, PUD_ORDER)) {
 		ret = create_huge_pud(&vmf);
 		if (!(ret & VM_FAULT_FALLBACK))
 			return ret;
@@ -5429,7 +5430,8 @@ static vm_fault_t __handle_mm_fault(struct vm_area_struct *vma,
 		goto retry_pud;
 
 	if (pmd_none(*vmf.pmd) &&
-	    thp_vma_allowable_order(vma, vm_flags, false, true, true, PMD_ORDER)) {
+	    thp_vma_allowable_order(vma, vm_flags,
+				TVA_IN_PF | TVA_ENFORCE_SYSFS, PMD_ORDER)) {
 		ret = create_huge_pmd(&vmf);
 		if (!(ret & VM_FAULT_FALLBACK))
 			return ret;
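
As a footnote for anyone skimming the idea rather than the diff: this is
the usual bools-to-flags refactor. Below is a minimal, self-contained
userspace C sketch of the pattern; the TVA_* names come from the patch
above, but the functions and order masks are invented purely for
illustration and are not kernel API.

	/* Sketch only: replacing opaque bool parameters with named flags. */
	#include <stdbool.h>
	#include <stdio.h>

	#define TVA_SMAPS		(1 << 0)
	#define TVA_IN_PF		(1 << 1)
	#define TVA_ENFORCE_SYSFS	(1 << 2)

	/* Before: "true, true" tells the reader nothing at the call site. */
	static unsigned long orders_bools(bool in_pf, bool enforce_sysfs)
	{
		return enforce_sysfs ? 0x008 : 0x1ff;	/* made-up masks */
	}

	/* After: each caller names exactly the behaviours it wants. */
	static unsigned long orders_flags(unsigned long tva_flags)
	{
		/* Decode once at the top, as __thp_vma_allowable_orders()
		 * does in the patch. */
		bool enforce_sysfs = tva_flags & TVA_ENFORCE_SYSFS;

		return enforce_sysfs ? 0x008 : 0x1ff;
	}

	int main(void)
	{
		printf("%#lx\n", orders_bools(true, true));	/* opaque */
		printf("%#lx\n", orders_flags(TVA_IN_PF | TVA_ENFORCE_SYSFS));
		return 0;
	}

A secondary benefit shows up in hugepage_vma_revalidate() above: a
conditional flag can be computed once (cc->is_khugepaged ?
TVA_ENFORCE_SYSFS : 0) instead of threading a bare bool through the call.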