From: Shinichiro Kawasaki <shinichiro.kawasaki@wdc.com>
To: Tejun Heo <tj@kernel.org>
Cc: "dm-devel@lists.linux.dev" <dm-devel@lists.linux.dev>,
Mike Snitzer <snitzer@kernel.org>,
Damien Le Moal <dlemoal@kernel.org>
Subject: Re: [PATCH] dm zoned: Drop the WQ_UNBOUND flag for the chunk workqueue
Date: Thu, 25 Apr 2024 11:14:17 +0000 [thread overview]
Message-ID: <z2xd4nr42remngtlea7zpkt6vwkh5onsltvo2zsngbhp6374mw@ofw4w2mwtxce> (raw)
In-Reply-To: <Zih-HANo8RV6L-4Y@slm.duckdns.org>
On Apr 23, 2024 / 17:35, Tejun Heo wrote:
> Hello,
>
> On Wed, Apr 24, 2024 at 12:36:45AM +0000, Shinichiro Kawasaki wrote:
> > Hello Tejun, thanks for the fix. I confirmed that the number of in-flight works
> > becomes larger than 8 during the unmount operation.
> >
> > total infl CPUtime CPUitsv CMW/RPR mayday rescued
> > dmz_cwq_dmz_dml_072 613 33 4.6 - 0 0 0
> >
> > total infl CPUtime CPUitsv CMW/RPR mayday rescued
> > dmz_cwq_dmz_dml_072 617 33 4.7 - 0 0 0
> >
> > total infl CPUtime CPUitsv CMW/RPR mayday rescued
> > dmz_cwq_dmz_dml_072 619 33 4.8 - 0 0 0
> >
> >
> > Also, I measured the xfs unmount time 10 times. The average is as follows.
> >
> > Kernel | Unmount time
> > ----------------------+--------------
> > v6.8 | 29m 3s
> > v6.9-rc2 | 34m 17s
> > v6.9-rc2 + Tejun fix | 30m 55s
> >
> > We can see that the fix reduced the unmount time, which is great! Still, there
> > is a gap from the v6.8 kernel. I think closing this gap can be left as future
> > work, and I hope the fix patch gets upstreamed.
>
> So, v6.8 was buggy in that it allowed a lot higher concurrency than
> requested. If higher concurrency is desirable, you can just set @max_active
> to a higher number. Currently, the absolute maximum is 512 (WQ_MAX_ACTIVE)
> but that number was picked arbitrarily probably more than a decade ago, so
> if you find higher concurrency useful, we can definitely push up that
> number.
Thanks for the explanation. Now I see why the gap exists between the v6.8 kernel
and the v6.9-rc2 kernel + fix. To confirm it, I tried @max_active = WQ_MAX_ACTIVE
on top of the fix and measured the xfs unmount time (average of 10 runs).

Kernel | Unmount time
-------------------------------------------------+--------------
v6.9-rc2 + Tejun fix + @max_active WQ_MAX_ACTIVE | 29m 27s
It looks comparable to the v6.8 kernel :)

At this point, I think the default @max_active is enough. If anyone requests
better dm-zoned performance, I will revisit the improvement opportunity with a
larger @max_active.
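For reference, the chunk workqueue discussed here is allocated in
drivers/md/dm-zoned-target.c. The sketch below is illustrative only, not the
applied patch: the exact call site, error handling, and label helper may differ
by kernel version, and the "after" variant reflects the @max_active = WQ_MAX_ACTIVE
experiment above rather than what was merged.

```c
/* Illustrative sketch of the chunk workqueue allocation in dmz_ctr(). */

/* Before: WQ_UNBOUND, with @max_active = 0 selecting the default
 * (WQ_DFL_ACTIVE). The v6.9 workqueue changes made this limit apply
 * more strictly, which is what slowed down unmount. */
dmz->chunk_wq = alloc_workqueue("dmz_cwq_%s",
				WQ_MEM_RECLAIM | WQ_UNBOUND, 0,
				dmz_metadata_label(dmz->metadata));

/* After (experiment above): drop WQ_UNBOUND and allow up to
 * WQ_MAX_ACTIVE (currently 512) concurrent chunk works. */
dmz->chunk_wq = alloc_workqueue("dmz_cwq_%s",
				WQ_MEM_RECLAIM, WQ_MAX_ACTIVE,
				dmz_metadata_label(dmz->metadata));
if (!dmz->chunk_wq) {
	ti->error = "Create chunk workqueue failed";
	return -ENOMEM;
}
```

As the measurements in this thread show, the default @max_active was sufficient
in the end, so raising it to WQ_MAX_ACTIVE remains an option only if higher
dm-zoned concurrency is ever needed.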
Thread overview: 10+ messages
2024-04-10 8:45 [PATCH] dm zoned: Drop the WQ_UNBOUND flag for the chunk workqueue Shin'ichiro Kawasaki
2024-04-10 17:51 ` Tejun Heo
2024-04-11 4:38 ` Shinichiro Kawasaki
2024-04-12 17:25 ` Tejun Heo
2024-04-15 2:24 ` Shinichiro Kawasaki
2024-04-23 0:43 ` Tejun Heo
2024-04-23 16:31 ` Tejun Heo
2024-04-24 0:36 ` Shinichiro Kawasaki
2024-04-24 3:35 ` Tejun Heo
2024-04-25 11:14 ` Shinichiro Kawasaki [this message]