From: Dave Chinner <david@fromorbit.com>
To: Kent Overstreet <kent.overstreet@linux.dev>
Cc: Waiman Long <longman@redhat.com>,
linux-fsdevel@vger.kernel.org, linux-block@vger.kernel.org,
linux-cachefs@redhat.com, dhowells@redhat.com,
gfs2@lists.linux.dev, dm-devel@lists.linux.dev,
linux-security-module@vger.kernel.org, selinux@vger.kernel.org,
linux-kernel@vger.kernel.org, Jan Kara <jack@suse.cz>
Subject: Re: [PATCH 04/11] lib/dlock-list: Make sibling CPUs share the same linked list
Date: Thu, 7 Dec 2023 17:25:44 +1100
Message-ID: <ZXFlaN8FCC8ryr/R@dread.disaster.area>
In-Reply-To: <20231207054259.gpx3cydlb6b7raax@moria.home.lan>
On Thu, Dec 07, 2023 at 12:42:59AM -0500, Kent Overstreet wrote:
> On Wed, Dec 06, 2023 at 05:05:33PM +1100, Dave Chinner wrote:
> > From: Waiman Long <longman@redhat.com>
> >
> > The dlock list needs one list for each of the CPUs available. However,
> > sibling CPUs share the L2 and probably the L1 caches too. As a
> > result, there is not much to gain in terms of avoiding cacheline
> > contention, while the cacheline footprint in the L1/L2 caches
> > increases, as the separate lists may each need to be cache resident.
> >
> > This patch makes all the sibling CPUs share the same list, thus
> > reducing the number of lists that need to be maintained in each
> > dlock list without having any noticeable impact on performance. It
> > also improves dlock list iteration performance as fewer lists need
> > to be iterated.
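Concretely, the idea is to point every CPU in a sibling group at the
list of the group's first CPU. A minimal sketch of the mapping (the
helper name is illustrative, not taken from the patch; the real code
would fold this into a dense index into the list array at init time):

  /*
   * Map a CPU to the representative CPU of its hyperthread siblings
   * so that all siblings of a core share one list/lock pair.
   */
  static int dlock_list_sibling_cpu(int cpu)
  {
          return cpumask_first(topology_sibling_cpumask(cpu));
  }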
>
> Seems Waiman was missed on the CC
Oops, I knew I missed someone important....
> it looks like there's some duplication of this with list_lru
> functionality - similar list-sharded-by-node idea.
For completely different reasons. The list_lru is aligned to the mm
zone architecture, which only partitions memory management accounting
and scanning actions down into NUMA nodes. It's also a per-node
ordered list (LRU), and it has intricate locking semantics that
expose internal list locks to external isolation functions that can
be called whilst a lock-protected traversal is in progress.
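To make that concrete, here is a minimal sketch of the list_lru
isolate-callback pattern (struct foo and foo_can_free() are made up
for illustration; the callback signature is the one the kernel used
at the time, with the walk handing its internal per-node lock to the
callback):

  static enum lru_status foo_isolate(struct list_head *item,
                  struct list_lru_one *lru, spinlock_t *lock, void *arg)
  {
          struct foo *f = container_of(item, struct foo, lru_node);

          if (!foo_can_free(f))
                  return LRU_ROTATE;      /* keep it, rotate to tail */

          /* Unlink while the walk still holds the internal lock. */
          list_lru_isolate(lru, item);
          return LRU_REMOVED;
  }

  /* Scan up to 128 items, callback invoked under the lru lock. */
  list_lru_walk(&foo_lru, foo_isolate, NULL, 128);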
Further, we have to consider that list-lru is tightly tied to
memcgs. For a single NUMA- and memcg-aware list-lru, there are
actually nr_memcgs * nr_nodes LRUs per list. The memory footprint of
a superblock list_lru gets quite gigantic when we start talking
about machines with hundreds of nodes running tens of thousands of
containers, each with tens of superblocks.
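To put rough (illustrative, assumed) numbers on that: 10,000 memcgs
across 100 nodes fans a single list_lru out into 10,000 * 100 =
1,000,000 per-memcg, per-node lists. At ~32 bytes of list head and
counter per list that's ~32MB for one list_lru, ~64MB per superblock
(dentry plus inode lists), so a hundred mounted superblocks already
costs over 6GB of tracking overhead before a single object is cached.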
That's the biggest problem with using a more memory-expensive
structure for the list_lru - we're talking gigabytes of memory just
for the superblock shrinker tracking structure overhead on large
machines. This is one of the reasons why we haven't tried to make
list_lrus any more fine-grained than they absolutely need to be to
provide acceptable scalability.
> list_lru does the sharding by page_to_nid() of the item, which
> saves a pointer and allows just using a list_head in the item.
> OTOH, it's less granular than what dlock-list is doing?
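For reference, the trick is that mainline's list_lru_add() of that
era derived the node from the object's own backing page, so no
back-pointer needs to be stored in the item (paraphrased from
mm/list_lru.c, not a verbatim copy):

  int nid = page_to_nid(virt_to_page(item));
  struct list_lru_node *nlru = &lru->node[nid];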
Sure, but there's a lot more to list_lrus than it being a "per-node
list". OTOH, dlock_list is really nothing more than a "per-cpu
list"....
> I think some attempt ought to be made to factor out the common
> ideas here; perhaps reworking list_lru to use this thing, and I
> hope someone has looked at the page_nid idea vs. dlock_list using
> the current core.
I certainly have, and I haven't been able to justify the additional
memory footprint of a dlock_list over the existing per-node lists.
That may change given that XFS appears to be on the threshold of
per-node list-lru lock breakdown at 64 threads, but there's a lot
more to consider from a system perspective here than just
inode/dentry cache scalability....
-Dave.
--
Dave Chinner
david@fromorbit.com
Thread overview: 34+ messages
2023-12-06 6:05 [PATCH 0/11] vfs: inode cache scalability improvements Dave Chinner
2023-12-06 6:05 ` [PATCH 01/11] lib/dlock-list: Distributed and lock-protected lists Dave Chinner
2023-12-07 2:23 ` Al Viro
2023-12-06 6:05 ` [PATCH 02/11] vfs: Remove unnecessary list_for_each_entry_safe() variants Dave Chinner
2023-12-07 2:26 ` Al Viro
2023-12-07 4:18 ` Kent Overstreet
2023-12-06 6:05 ` [PATCH 03/11] vfs: Use dlock list for superblock's inode list Dave Chinner
2023-12-07 2:40 ` Al Viro
2023-12-07 4:59 ` Dave Chinner
2023-12-07 5:03 ` Kent Overstreet
2023-12-06 6:05 ` [PATCH 04/11] lib/dlock-list: Make sibling CPUs share the same linked list Dave Chinner
2023-12-07 4:31 ` Kent Overstreet
2023-12-07 5:42 ` Kent Overstreet
2023-12-07 6:25 ` Dave Chinner [this message]
2023-12-07 6:49 ` Al Viro
2023-12-06 6:05 ` [PATCH 05/11] selinux: use dlist for isec inode list Dave Chinner
2023-12-06 21:52 ` Paul Moore
2023-12-06 23:04 ` Dave Chinner
2023-12-07 0:36 ` Paul Moore
2023-12-06 6:05 ` [PATCH 06/11] vfs: factor out inode hash head calculation Dave Chinner
2023-12-07 3:02 ` Al Viro
2023-12-06 6:05 ` [PATCH 07/11] hlist-bl: add hlist_bl_fake() Dave Chinner
2023-12-07 3:05 ` Al Viro
2023-12-06 6:05 ` [PATCH 08/11] vfs: inode cache conversion to hash-bl Dave Chinner
2023-12-07 4:58 ` Kent Overstreet
2023-12-07 6:03 ` Dave Chinner
2023-12-07 6:42 ` Al Viro
2023-12-06 6:05 ` [PATCH 09/11] hash-bl: explicitly initialise hash-bl heads Dave Chinner
2023-12-07 3:15 ` Al Viro
2023-12-06 6:05 ` [PATCH 10/11] list_bl: don't use bit locks for PREEMPT_RT or lockdep Dave Chinner
2023-12-07 4:16 ` Kent Overstreet
2023-12-07 4:41 ` Dave Chinner
2023-12-06 6:05 ` [PATCH 11/11] hlist-bl: introduced nested locking for dm-snap Dave Chinner
2023-12-07 17:08 ` [PATCH 0/11] vfs: inode cache scalability improvements Kent Overstreet