From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Tue, 07 Sep 2021 19:53:25 -0700
From: Andrew Morton
To: akpm@linux-foundation.org, bigeasy@linutronix.de, brouer@redhat.com,
 cl@linux.com, efault@gmx.de, iamjoonsoo.kim@lge.com, jannh@google.com,
 linux-mm@kvack.org, mgorman@techsingularity.net, mm-commits@vger.kernel.org,
 penberg@kernel.org, quic_qiancai@quicinc.com, rientjes@google.com,
 tglx@linutronix.de, torvalds@linux-foundation.org, vbabka@suse.cz
Subject: [patch 009/147] mm, slub: restructure new page checks in ___slab_alloc()
Message-ID: <20210908025325.k8MQE96j7%akpm@linux-foundation.org>
In-Reply-To: <20210907195226.14b1d22a07c085b22968b933@linux-foundation.org>

From: Vlastimil Babka
Subject: mm, slub: restructure new page checks in ___slab_alloc()

When we allocate a slab object from a newly acquired page (from the node's
partial list or the page allocator), we usually also retain the page as a
new percpu slab.  There are two exceptions - when the pfmemalloc status of
the page doesn't match our gfp flags, or when the cache has debugging
enabled.

The current code for these decisions is not easy to follow, so restructure
it and add comments.  The new structure will also help with the following
changes.  No functional change.

Link: https://lkml.kernel.org/r/20210904105003.11688-10-vbabka@suse.cz
Signed-off-by: Vlastimil Babka
Acked-by: Mel Gorman
Cc: Christoph Lameter
Cc: David Rientjes
Cc: Jann Horn
Cc: Jesper Dangaard Brouer
Cc: Joonsoo Kim
Cc: Mike Galbraith
Cc: Pekka Enberg
Cc: Qian Cai
Cc: Sebastian Andrzej Siewior
Cc: Thomas Gleixner
Signed-off-by: Andrew Morton
---

 mm/slub.c |   28 ++++++++++++++++++++++------
 1 file changed, 22 insertions(+), 6 deletions(-)

--- a/mm/slub.c~mm-slub-restructure-new-page-checks-in-___slab_alloc
+++ a/mm/slub.c
@@ -2782,13 +2782,29 @@ new_slab:
 	c->page = page;
 
 check_new_page:
-	if (likely(!kmem_cache_debug(s) && pfmemalloc_match(page, gfpflags)))
-		goto load_freelist;
 
-	/* Only entered in the debug case */
-	if (kmem_cache_debug(s) &&
-			!alloc_debug_processing(s, page, freelist, addr))
-		goto new_slab;	/* Slab failed checks. Next slab needed */
+	if (kmem_cache_debug(s)) {
+		if (!alloc_debug_processing(s, page, freelist, addr))
+			/* Slab failed checks. Next slab needed */
+			goto new_slab;
+		else
+			/*
+			 * For debug case, we don't load freelist so that all
+			 * allocations go through alloc_debug_processing()
+			 */
+			goto return_single;
+	}
+
+	if (unlikely(!pfmemalloc_match(page, gfpflags)))
+		/*
+		 * For !pfmemalloc_match() case we don't load freelist so that
+		 * we don't make further mismatched allocations easier.
+		 */
+		goto return_single;
+
+	goto load_freelist;
+
+return_single:
 
 	deactivate_slab(s, page, get_freepointer(s, freelist), c);
 	return freelist;
_
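
[Not part of the patch.] For readers following the diff, the sketch below models the restructured check_new_page decision flow in plain, userspace C. It is only an illustration under assumptions: fake_cache, fake_page, cache_debug(), debug_checks_pass() and pfmemalloc_ok() are invented stand-ins for kmem_cache_debug(), alloc_debug_processing() and pfmemalloc_match(), and the returned strings name the goto targets in ___slab_alloc(); none of this is the real SLUB code.

/*
 * Simplified, userspace-only model of the restructured check_new_page
 * logic.  The stubs are placeholders for the real SLUB helpers; they
 * exist only so the example compiles and runs.
 */
#include <stdbool.h>
#include <stdio.h>

struct fake_cache { bool debug; };		/* stands in for struct kmem_cache */
struct fake_page  { bool pfmemalloc; };		/* stands in for struct page */

/* Stand-in for kmem_cache_debug(s): is debugging enabled for this cache? */
static bool cache_debug(const struct fake_cache *s) { return s->debug; }

/* Stand-in for alloc_debug_processing(): pretend the checks always pass. */
static bool debug_checks_pass(const struct fake_cache *s) { (void)s; return true; }

/* Stand-in for pfmemalloc_match(page, gfpflags). */
static bool pfmemalloc_ok(const struct fake_page *page) { return !page->pfmemalloc; }

/*
 * Returns a string naming the path taken, mirroring the goto targets:
 * "new_slab", "return_single" or "load_freelist".
 */
static const char *check_new_page(const struct fake_cache *s,
				  const struct fake_page *page)
{
	if (cache_debug(s)) {
		if (!debug_checks_pass(s))
			return "new_slab";	/* slab failed checks, try the next one */
		/* Debug case: don't load the freelist, return a single object. */
		return "return_single";
	}

	if (!pfmemalloc_ok(page))
		/* pfmemalloc mismatch: don't keep this page as the percpu slab. */
		return "return_single";

	/* Common case: keep the page as the new percpu slab. */
	return "load_freelist";
}

int main(void)
{
	struct fake_cache debug_cache = { .debug = true };
	struct fake_cache plain_cache = { .debug = false };
	struct fake_page normal_page  = { .pfmemalloc = false };
	struct fake_page pfmem_page   = { .pfmemalloc = true };

	printf("debug cache         -> %s\n", check_new_page(&debug_cache, &normal_page));
	printf("pfmemalloc mismatch -> %s\n", check_new_page(&plain_cache, &pfmem_page));
	printf("normal allocation   -> %s\n", check_new_page(&plain_cache, &normal_page));
	return 0;
}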