From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Wed, 8 May 2024 10:10:43 +0100
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH v3 2/6] mm: remove swap_free() and always use swap_free_nr()
Content-Language: en-GB
To: Barry Song <21cnbao@gmail.com>, "Huang, Ying" , Christoph Hellwig ,
 chrisl@kernel.org
Cc: akpm@linux-foundation.org, linux-mm@kvack.org, baolin.wang@linux.alibaba.com,
 david@redhat.com, hanchuanhua@oppo.com, hannes@cmpxchg.org, hughd@google.com,
 kasong@tencent.com, linux-kernel@vger.kernel.org, surenb@google.com,
 v-songbaohua@oppo.com, willy@infradead.org,
 xiang@kernel.org, yosryahmed@google.com, yuzhao@google.com, ziy@nvidia.com,
 "Rafael J. Wysocki" , Pavel Machek , Len Brown
References: <20240503005023.174597-1-21cnbao@gmail.com>
 <20240503005023.174597-3-21cnbao@gmail.com>
 <87y18kivny.fsf@yhuang6-desk2.ccr.corp.intel.com>
From: Ryan Roberts
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 08/05/2024 09:30, Barry Song wrote:
> On Wed, May 8, 2024 at 7:58 PM Huang, Ying wrote:
>>
>> Ryan Roberts writes:
>>
>>> On 03/05/2024 01:50, Barry Song wrote:
>>>> From: Barry Song
>>>>
>>>> To streamline maintenance efforts, we propose discontinuing the use of
>>>> swap_free(). Instead, we can simply invoke swap_free_nr() with nr set
>>>> to 1. This adjustment offers the advantage of enabling batch processing
>>>> within kernel/power/swap.c. Furthermore, swap_free_nr() is designed with
>>>> a bitmap consisting of only one long, resulting in overhead that can be
>>>> ignored for cases where nr equals 1.
>>>>
>>>> Suggested-by: "Huang, Ying"
>>>> Signed-off-by: Barry Song
>>>> Cc: "Rafael J. Wysocki"
Wysocki" >>>> Cc: Pavel Machek >>>> Cc: Len Brown >>>> Cc: Hugh Dickins >>>> --- >>>> include/linux/swap.h | 5 ----- >>>> kernel/power/swap.c | 7 +++---- >>>> mm/memory.c | 2 +- >>>> mm/rmap.c | 4 ++-- >>>> mm/shmem.c | 4 ++-- >>>> mm/swapfile.c | 19 +++++-------------- >>>> 6 files changed, 13 insertions(+), 28 deletions(-) >>>> >>>> diff --git a/include/linux/swap.h b/include/linux/swap.h >>>> index d1d35e92d7e9..f03cb446124e 100644 >>>> --- a/include/linux/swap.h >>>> +++ b/include/linux/swap.h >>>> @@ -482,7 +482,6 @@ extern int add_swap_count_continuation(swp_entry_t, gfp_t); >>>> extern void swap_shmem_alloc(swp_entry_t); >>>> extern int swap_duplicate(swp_entry_t); >>>> extern int swapcache_prepare(swp_entry_t); >>>> -extern void swap_free(swp_entry_t); >>> >>> I wonder if it would be cleaner to: >>> >>> #define swap_free(entry) swap_free_nr((entry), 1) >>> >>> To save all the churn for the callsites that just want to pass a single entry? >> >> I prefer this way. Although I prefer inline functions. Yes, I agree inline function is the better approach. > > Yes, using static inline is preferable. I've recently submitted > a checkpatch/codestyle for this, which can be found at: > https://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm.git/commit/?h=mm-everything&id=39c58d5ed036 > https://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm.git/commit/?h=mm-everything&id=8379bf0b0e1f5 > > Using static inline aligns with the established rule. > >> >> Otherwise, LGTM. Feel free to add >> >> Reviewed-by: "Huang, Ying" > > Thanks! > >> >> in the future version. > > I believe Christoph's vote leans towards simply removing swap_free_nr > and renaming it to swap_free, while adding a new parameter as follows. > > void swap_free(swp_entry_t entry, int nr); > { > } > > now I see Ryan and you prefer > > static inline swap_free() > { > swap_free_nr(...., 1) > } > > Chris slightly favors discouraging the use of swap_free() without the > new parameter. Removing swap_free() can address this concern. > > It seems that maintaining swap_free() and having it call swap_free_nr() with > a default value of 1 received the most support. > > To align with free_swap_and_cache() and free_swap_and_cache_nr(), > I'll proceed with the "static inline" approach in the new version. Please > voice any objections you may have, Christoph, Chris. I'm happy with either route. If you end up adding a nr param to swap_free() then it would also be good to give free_swap_and_cache_nr() the same treatment. 
>
>>
>>>>  extern void swap_free_nr(swp_entry_t entry, int nr_pages);
>>>>  extern void swapcache_free_entries(swp_entry_t *entries, int n);
>>>>  extern void free_swap_and_cache_nr(swp_entry_t entry, int nr);
>>>> @@ -561,10 +560,6 @@ static inline int swapcache_prepare(swp_entry_t swp)
>>>>  	return 0;
>>>>  }
>>>>
>>>> -static inline void swap_free(swp_entry_t swp)
>>>> -{
>>>> -}
>>>> -
>>>>  static inline void swap_free_nr(swp_entry_t entry, int nr_pages)
>>>>  {
>>>>  }
>>>> diff --git a/kernel/power/swap.c b/kernel/power/swap.c
>>>> index 5bc04bfe2db1..6befaa88a342 100644
>>>> --- a/kernel/power/swap.c
>>>> +++ b/kernel/power/swap.c
>>>> @@ -181,7 +181,7 @@ sector_t alloc_swapdev_block(int swap)
>>>>  	offset = swp_offset(get_swap_page_of_type(swap));
>>>>  	if (offset) {
>>>>  		if (swsusp_extents_insert(offset))
>>>> -			swap_free(swp_entry(swap, offset));
>>>> +			swap_free_nr(swp_entry(swap, offset), 1);
>>>>  		else
>>>>  			return swapdev_block(swap, offset);
>>>>  	}
>>>> @@ -200,12 +200,11 @@ void free_all_swap_pages(int swap)
>>>>
>>>>  	while ((node = swsusp_extents.rb_node)) {
>>>>  		struct swsusp_extent *ext;
>>>> -		unsigned long offset;
>>>>
>>>>  		ext = rb_entry(node, struct swsusp_extent, node);
>>>>  		rb_erase(node, &swsusp_extents);
>>>> -		for (offset = ext->start; offset <= ext->end; offset++)
>>>> -			swap_free(swp_entry(swap, offset));
>>>> +		swap_free_nr(swp_entry(swap, ext->start),
>>>> +			     ext->end - ext->start + 1);
>>>>
>>>>  		kfree(ext);
>>>>  	}
>>>> diff --git a/mm/memory.c b/mm/memory.c
>>>> index eea6e4984eae..f033eb3528ba 100644
>>>> --- a/mm/memory.c
>>>> +++ b/mm/memory.c
>>>> @@ -4225,7 +4225,7 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
>>>>  	 * We're already holding a reference on the page but haven't mapped it
>>>>  	 * yet.
>>>>  	 */
>>>> -	swap_free(entry);
>>>> +	swap_free_nr(entry, 1);
>>>>  	if (should_try_to_free_swap(folio, vma, vmf->flags))
>>>>  		folio_free_swap(folio);
>>>>
>>>> diff --git a/mm/rmap.c b/mm/rmap.c
>>>> index 087a79f1f611..39ec7742acec 100644
>>>> --- a/mm/rmap.c
>>>> +++ b/mm/rmap.c
>>>> @@ -1865,7 +1865,7 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
>>>>  				goto walk_done_err;
>>>>  			}
>>>>  			if (arch_unmap_one(mm, vma, address, pteval) < 0) {
>>>> -				swap_free(entry);
>>>> +				swap_free_nr(entry, 1);
>>>>  				set_pte_at(mm, address, pvmw.pte, pteval);
>>>>  				goto walk_done_err;
>>>>  			}
>>>> @@ -1873,7 +1873,7 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
>>>>  			/* See folio_try_share_anon_rmap(): clear PTE first. */
>>>>  			if (anon_exclusive &&
>>>>  			    folio_try_share_anon_rmap_pte(folio, subpage)) {
>>>> -				swap_free(entry);
>>>> +				swap_free_nr(entry, 1);
>>>>  				set_pte_at(mm, address, pvmw.pte, pteval);
>>>>  				goto walk_done_err;
>>>>  			}
>>>> diff --git a/mm/shmem.c b/mm/shmem.c
>>>> index fa2a0ed97507..bfc8a2beb24f 100644
>>>> --- a/mm/shmem.c
>>>> +++ b/mm/shmem.c
>>>> @@ -1836,7 +1836,7 @@ static void shmem_set_folio_swapin_error(struct inode *inode, pgoff_t index,
>>>>  	 * in shmem_evict_inode().
>>>>  	 */
>>>>  	shmem_recalc_inode(inode, -1, -1);
>>>> -	swap_free(swap);
>>>> +	swap_free_nr(swap, 1);
>>>>  }
>>>>
>>>>  /*
>>>> @@ -1927,7 +1927,7 @@ static int shmem_swapin_folio(struct inode *inode, pgoff_t index,
>>>>
>>>>  	delete_from_swap_cache(folio);
>>>>  	folio_mark_dirty(folio);
>>>> -	swap_free(swap);
>>>> +	swap_free_nr(swap, 1);
>>>>  	put_swap_device(si);
>>>>
>>>>  	*foliop = folio;
>>>> diff --git a/mm/swapfile.c b/mm/swapfile.c
>>>> index ec12f2b9d229..ddcd0f24b9a1 100644
>>>> --- a/mm/swapfile.c
>>>> +++ b/mm/swapfile.c
>>>> @@ -1343,19 +1343,6 @@ static void swap_entry_free(struct swap_info_struct *p, swp_entry_t entry)
>>>>  	swap_range_free(p, offset, 1);
>>>>  }
>>>>
>>>> -/*
>>>> - * Caller has made sure that the swap device corresponding to entry
>>>> - * is still around or has not been recycled.
>>>> - */
>>>> -void swap_free(swp_entry_t entry)
>>>> -{
>>>> -	struct swap_info_struct *p;
>>>> -
>>>> -	p = _swap_info_get(entry);
>>>> -	if (p)
>>>> -		__swap_entry_free(p, entry);
>>>> -}
>>>> -
>>>>  static void cluster_swap_free_nr(struct swap_info_struct *sis,
>>>>  		unsigned long offset, int nr_pages)
>>>>  {
>>>> @@ -1385,6 +1372,10 @@ static void cluster_swap_free_nr(struct swap_info_struct *sis,
>>>>  	unlock_cluster_or_swap_info(sis, ci);
>>>>  }
>>>>
>>>> +/*
>>>> + * Caller has made sure that the swap device corresponding to entry
>>>> + * is still around or has not been recycled.
>>>> + */
>>>>  void swap_free_nr(swp_entry_t entry, int nr_pages)
>>>>  {
>>>>  	int nr;
>>>> @@ -1930,7 +1921,7 @@ static int unuse_pte(struct vm_area_struct *vma, pmd_t *pmd,
>>>>  	new_pte = pte_mkuffd_wp(new_pte);
>>>> setpte:
>>>>  	set_pte_at(vma->vm_mm, addr, pte, new_pte);
>>>> -	swap_free(entry);
>>>> +	swap_free_nr(entry, 1);
>>>>  out:
>>>>  	if (pte)
>>>>  		pte_unmap_unlock(pte, ptl);
>>
>> --
>> Best Regards,
>> Huang, Ying
>
> Thanks
> Barry