From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path: 
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1752970AbbFEME4 (ORCPT );
	Fri, 5 Jun 2015 08:04:56 -0400
Received: from mail-pd0-f174.google.com ([209.85.192.174]:33901 "EHLO
	mail-pd0-f174.google.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org
	with ESMTP id S932118AbbFEMEw (ORCPT );
	Fri, 5 Jun 2015 08:04:52 -0400
From: Sergey Senozhatsky 
To: Andrew Morton , Minchan Kim 
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	Sergey Senozhatsky , Sergey Senozhatsky 
Subject: [RFC][PATCHv2 2/8] zsmalloc: partial page ordering within a fullness_list
Date: Fri, 5 Jun 2015 21:03:52 +0900
Message-Id: <1433505838-23058-3-git-send-email-sergey.senozhatsky@gmail.com>
X-Mailer: git-send-email 2.4.2.387.gf86f31a
In-Reply-To: <1433505838-23058-1-git-send-email-sergey.senozhatsky@gmail.com>
References: <1433505838-23058-1-git-send-email-sergey.senozhatsky@gmail.com>
Sender: linux-kernel-owner@vger.kernel.org
List-ID: 
X-Mailing-List: linux-kernel@vger.kernel.org

We want to see more ZS_FULL pages and fewer ZS_ALMOST_{FULL, EMPTY}
pages. Put a page with a higher ->inuse count first within its
->fullness_list, which gives us a better chance to fill up that page
with new objects (find_get_zspage() returns the ->fullness_list head
for new object allocation), so some zspages become
ZS_ALMOST_FULL/ZS_FULL quicker.

It performs a trivial and cheap ->inuse compare which does not slow
down zsmalloc, and in the worst case it leaves the list pages in no
particular order, just as we do now. A more expensive solution could
sort fullness_list by ->inuse count.
Signed-off-by: Sergey Senozhatsky 
---
 mm/zsmalloc.c | 12 ++++++++++--
 1 file changed, 10 insertions(+), 2 deletions(-)

diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index ce3310c..cd37bda 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -658,8 +658,16 @@ static void insert_zspage(struct page *page, struct size_class *class,
 		return;
 
 	head = &class->fullness_list[fullness];
-	if (*head)
-		list_add_tail(&page->lru, &(*head)->lru);
+	if (*head) {
+		/*
+		 * We want to see more ZS_FULL pages and less almost
+		 * empty/full. Put pages with higher ->inuse first.
+		 */
+		if (page->inuse < (*head)->inuse)
+			list_add_tail(&page->lru, &(*head)->lru);
+		else
+			list_add(&page->lru, &(*head)->lru);
+	}
 
 	*head = page;
 	zs_stat_inc(class, fullness == ZS_ALMOST_EMPTY ?
-- 
2.4.2.387.gf86f31a