From: Chuck Lever <chuck.lever@oracle.com>
To: Jeff Layton <jlayton@kernel.org>
Cc: kdevops@lists.linux.dev
Subject: Re: [PATCH 8/8] nfsd: default to a more sane number of threads
Date: Wed, 20 Mar 2024 11:15:44 -0400
Message-ID: <Zfr9oGovJKFGNf3q@manet.1015granger.net>
In-Reply-To: <cc176de1940179f59087f06e8158dae50090f357.camel@kernel.org>

On Wed, Mar 20, 2024 at 02:48:37PM +0000, Jeff Layton wrote:
> On Wed, 2024-03-20 at 10:10 -0400, Chuck Lever wrote:
> > On Wed, Mar 20, 2024 at 09:11:31AM -0400, Jeff Layton wrote:
> > > Add a small script that can estimate the number of threads for
> > > nfsd. We default to 32 threads for every GB of guest memory.
> > > 
> > > Signed-off-by: Jeff Layton <jlayton@kernel.org>
> > > ---
> > >  kconfigs/Kconfig.nfsd      |  2 +-
> > >  scripts/nfsd-thread-est.sh | 15 +++++++++++++++
> > >  2 files changed, 16 insertions(+), 1 deletion(-)
> > > 
> > > diff --git a/kconfigs/Kconfig.nfsd b/kconfigs/Kconfig.nfsd
> > > index dec98c1e964f..7c28ad98955a 100644
> > > --- a/kconfigs/Kconfig.nfsd
> > > +++ b/kconfigs/Kconfig.nfsd
> > > @@ -69,7 +69,7 @@ config NFSD_EXPORT_OPTIONS
> > >  
> > >  config NFSD_THREADS
> > >  	int "Number of nfsd threads to spawn"
> > > -	default 8
> > > +	default $(shell, scripts/nfsd-thread-est.sh)
> > >  	help
> > >  	  Number of nfsd threads to start up for testing.
> > >  
> > > diff --git a/scripts/nfsd-thread-est.sh b/scripts/nfsd-thread-est.sh
> > > new file mode 100755
> > > index 000000000000..dc5a1deb1215
> > > --- /dev/null
> > > +++ b/scripts/nfsd-thread-est.sh
> > > @@ -0,0 +1,15 @@
> > > +#!/bin/bash
> > > +#
> > > +# The default number of 8 nfsd threads is pitifully low!
> > > +#
> > > +#
> > > +# Each nfsd thread consumes ~1MB of memory, long-term:
> > > +#
> > > +# stack, kthread overhead, svc_rqst, and the rq_pages array, plus
> > > +# temporary working space.
> > > +#
> > > +# Allow 32 threads for each GB of guest memory:
> > 
> > I agree that 8 is a small number, but that multiplier seems
> > excessive. On my 16GB test systems, that would create 512 threads.
> > I set the nfsd thread count to 32 on my test NFS server, and I've
> > never seen more than 10-15 threads for any test. (OK, maybe I should
> > run tests that drive the test server harder).
> 
> How are you measuring that?

I watch "top" while the tests are running. You can see CPU time
accumulating for nfsd threads as they rise to the top of the display
window. I see between ten and fifteen nfsd threads in that window
after a test run is done. Other nfsd threads remain unused.

It's the unused threads that worry me -- that is memory more or less
permanently tied up in rq_pages without serving any useful purpose.
We don't yet have a shrinker to reduce the thread count, for
instance.
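
(Back-of-the-envelope, using the ~1MB-per-thread figure from the
patch comment; the 16GB machine is only an example:

  mem_gb=16                    # e.g. one of my test systems
  threads=$((mem_gb * 32))     # proposed default: 512 threads
  pinned_mb=$threads           # at ~1MB per thread: ~512MB
  echo "$threads threads, ~${pinned_mb}MB held while idle"

That is roughly 3% of RAM pinned whether or not a single NFS request
ever arrives.)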


> I'm open to a different formula. This one is
> based on a total SWAG.

So I think your rationale and formula work as a /maximum/ thread
count. But distributions can't assume that NFSD will be the only
memory consumer on a system, so taking that much memory for
nominally idle nfsd threads seems undesirable.

I've toyed with the idea of 3 * CPU-core-count as a rough default
number of threads, though it really depends on NFS workload and the
speed of permanent storage. Sigh.
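
(As a sketch, that heuristic is just:

  # Rough default: three nfsd threads per online CPU
  echo $(( $(nproc) * 3 ))

On an 8-core guest that gives 24 threads, versus the 512 that the
32-per-GB formula produces on a 16GB guest.)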


> > Plus, no distribution uses that configuration. Maybe we should stick
> > to what's a common deployment scenario?
> > 
> 
> They don't use that configuration because rpc.nfsd still defaults to 8.
> Admins have to _know_ to increase the thread count, which sort of sucks.
> I'd like to make that more dynamic, but that's a separate project.
> Still, maybe worthwhile to look at soon...

I agree that increasing the default thread count is a super idea,
but I don't think we know enough yet to pick a new number. No doubt
a fixed value of 8 is too low as a distribution default, and
distributions will look to us to provide a strong rationale for
whatever we choose instead.


> > Do you have data that shows creating more threads improves test run
> > time?
> > 
> 
> No, but I think the overhead from that number of threads is pretty
> minimal and the whole purpose of the kdevops nfsd host is to serve NFS
> requests. Also, this _is_ just a default.
> 
> BTW, do we collect stats on how often we can't find a thread to run? I
> guess that's all the svc_defer stuff?

Good questions...

I had added a trace point at one time, but Neil removed all that
when he rewrote the svc thread scheduler last year.

We used to have a thread histogram too, but that's also been
removed.

It would be nice if we could find a generic tool for looking at
thread utilization so we don't have to build and maintain our own.
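
(Until such a tool turns up, generic per-task sampling might be
close enough -- a sketch assuming sysstat's pidstat is installed and
will report on explicitly selected kernel threads:

  # Sample CPU usage of every nfsd kthread at 5-second intervals
  pidstat -p "$(pgrep -d, -x nfsd)" 5

Threads that never show non-zero %system across a run are the unused
ones.)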


-- 
Chuck Lever
