From: Yujie Liu <yujie.liu@intel.com>
To: Josh Poimboeuf <jpoimboe@kernel.org>
Cc: <x86@kernel.org>, <linux-kernel@vger.kernel.org>,
	Linus Torvalds <torvalds@linux-foundation.org>,
	Daniel Sneddon <daniel.sneddon@linux.intel.com>,
	Pawan Gupta <pawan.kumar.gupta@linux.intel.com>,
	Thomas Gleixner <tglx@linutronix.de>,
	Alexandre Chartre <alexandre.chartre@oracle.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Peter Zijlstra <peterz@infradead.org>,
	"Greg Kroah-Hartman" <gregkh@linuxfoundation.org>,
	Sean Christopherson <seanjc@google.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Dave Hansen <dave.hansen@linux.intel.com>,
	Nikolay Borisov <nik.borisov@suse.com>,
	"KP Singh" <kpsingh@kernel.org>, Waiman Long <longman@redhat.com>,
	"Borislav Petkov" <bp@alien8.de>, Ingo Molnar <mingo@kernel.org>
Subject: Re: [PATCH v4 1/5] x86/bugs: Only harden syscalls when needed
Date: Mon, 20 May 2024 13:21:32 +0800
Message-ID: <Zkrd3Cnh8EBXwGAR@yujie-X299>
In-Reply-To: <20240507051741.4crk2pd2fuh4euyd@treble>

On Mon, May 06, 2024 at 10:17:41PM -0700, Josh Poimboeuf wrote:
> On Mon, Apr 22, 2024 at 04:09:33PM +0800, Yujie Liu wrote:
> > On Fri, Apr 19, 2024 at 02:09:47PM -0700, Josh Poimboeuf wrote:
> > > Syscall hardening (converting the syscall indirect branch to a series of
> > > direct branches) has shown some performance regressions:
> > >
> > > - Red Hat internal testing showed up to 12% slowdowns in database
> > >   benchmark testing on Sapphire Rapids when the DB was stressed with 80+
> > >   users to cause contention.
> > >
> > > - The kernel test robot's will-it-scale benchmarks showed significant
> > >   regressions on Skylake with IBRS:
> > >   https://lkml.kernel.org/lkml/202404191333.178a0eed-yujie.liu@intel.com
> > 
> > To clarify, we reported a +1.4% improvement (not a regression) in the
> > will-it-scale futex4 benchmark on Skylake. Meanwhile, we did observe some
> > regressions when running other benchmarks on Ice Lake, such as:
> > 
> >     stress-ng.null.ops_per_sec -4.0% regression on Intel Xeon Gold 6346 (Ice Lake)
> >     unixbench.fsbuffer.throughput -1.4% regression on Intel Xeon Gold 6346 (Ice Lake)
> 
> Thanks for clarifying that.  I'm not sure what I was looking at.
> 
> I also saw your email where Ice Lake showed a ~10% regression for
> 1e3ad78334a6.  Unfortunately my patch wouldn't help with that, as it's
> designed to help with older systems (e.g., Skylake) and newer (e.g.,
> Sapphire Rapids) but not Ice/Cascade Lake.
> 
> Whether 1e3ad78334a6 helps or hurts seems very workload-dependent.
> 
> It would be especially interesting to see if my patch helps on the newer
> systems which have the HW mitigation: Raptor Lake, Meteor Lake, Sapphire
> Rapids, Emerald Rapids.
> 
> For now, maybe I'll just table this patch until we have more data.

FYI, we tested this patch on a Sapphire Rapids machine in our test
infrastructure. There were both improvements and regressions across
different benchmarks, which suggests the impact is indeed very
workload-dependent.
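
For readers who have not looked at the patch itself: the hardening under
discussion replaces the single indirect branch through the syscall table
with per-syscall direct branches. The snippet below is only a userspace
sketch of the two dispatch styles (made-up handler names and a tiny
table), not the actual x86 entry code:

  #include <stdio.h>

  /*
   * Illustrative sketch only -- not kernel code.  demo_read and
   * demo_write stand in for real syscall handlers.
   */
  typedef long (*sys_fn)(long arg);

  static long demo_read(long arg)  { return arg + 1; }
  static long demo_write(long arg) { return arg + 2; }

  static sys_fn demo_table[] = { demo_read, demo_write };

  /* Unhardened: one indirect branch through the table. */
  static long dispatch_indirect(unsigned int nr, long arg)
  {
          return demo_table[nr](arg);
  }

  /* Hardened: a switch of direct branches, no indirect call. */
  static long dispatch_direct(unsigned int nr, long arg)
  {
          switch (nr) {
          case 0: return demo_read(arg);
          case 1: return demo_write(arg);
          default: return -1;
          }
  }

  int main(void)
  {
          printf("%ld %ld\n", dispatch_indirect(0, 10),
                 dispatch_direct(1, 10));
          return 0;
  }

The direct-branch form removes the attacker-trainable indirect call at
the cost of a larger dispatch path, which is presumably where the
workload-dependent wins and losses come from.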

Intel Xeon Platinum 8480CTDX (Sapphire Rapids) with 512G Memory:

+-----------------------+----------+---------------+------------------+
| benchmark             | mainline |  + this patch | percent change   |
+-----------------------+----------+---------------+------------------+
| will-it-scale.futex4  | 5015892  | 5118744       | +2.05%           |
+-----------------------+----------+---------------+------------------+
| will-it-scale.pread1  | 2720498  | 2721206       | +0.03%           |
+-----------------------+----------+---------------+------------------+
| will-it-scale.pwrite1 | 2143302  | 2164511       | +0.99%           |
+-----------------------+----------+---------------+------------------+
| will-it-scale.poll1   | 4046943  | 4095046       | +1.19%           |
+-----------------------+----------+---------------+------------------+
| stress-ng.null        | 19400    | 18689         | -3.66%           |
+-----------------------+----------+---------------+------------------+
| unixbench.fsbuffer    | 369494   | 364181        | -1.44%           |
+-----------------------+----------+---------------+------------------+
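
For reference, the last column is the relative change against mainline,
i.e. (patched - mainline) / mainline * 100. A quick standalone check of
the futex4 row (plain C, not part of the test harness):

  #include <stdio.h>

  int main(void)
  {
          /* futex4 numbers from the table above */
          double mainline = 5015892.0;
          double patched  = 5118744.0;

          /* prints +2.05 */
          printf("%+.2f\n", (patched - mainline) / mainline * 100.0);
          return 0;
  }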

Thanks,
Yujie

Thread overview: 24+ messages
2024-04-19 21:09 [PATCH 0/5] x86/bugs: more BHI fixes Josh Poimboeuf
2024-04-19 21:09 ` [PATCH v4 1/5] x86/bugs: Only harden syscalls when needed Josh Poimboeuf
2024-04-22  8:09   ` Yujie Liu
2024-05-07  5:17     ` Josh Poimboeuf
2024-05-20  5:21       ` Yujie Liu [this message]
2024-04-19 21:09 ` [PATCH v4 2/5] cpu/speculation: Fix CPU mitigation defaults for !x86 Josh Poimboeuf
2024-04-20  0:09   ` Sean Christopherson
2024-04-23 14:10     ` Sean Christopherson
2024-04-24  5:35       ` Josh Poimboeuf
2024-04-19 21:09 ` [PATCH v4 3/5] x86/syscall: Mark exit[_group] syscall handlers __noreturn Josh Poimboeuf
2024-04-20 13:58   ` Paul E. McKenney
2024-04-21  5:25     ` Josh Poimboeuf
2024-04-21 20:40       ` Paul McKenney
2024-04-21 21:47         ` Paul McKenney
2024-05-02 23:48           ` Paul McKenney
2024-05-03 15:38             ` Paul E. McKenney
2024-05-03 19:56             ` Josh Poimboeuf
2024-05-03 20:44               ` Josh Poimboeuf
2024-05-03 23:33                 ` Paul E. McKenney
2024-05-03 23:48                   ` Josh Poimboeuf
2024-05-04 16:48                     ` Paul E. McKenney
2024-04-19 21:09 ` [PATCH v4 4/5] x86/bugs: Remove duplicate Spectre cmdline option descriptions Josh Poimboeuf
2024-04-19 21:09 ` [PATCH v4 5/5] x86/bugs: Add 'spectre_bhi=vmexit' cmdline option Josh Poimboeuf
2024-04-19 21:46   ` Josh Poimboeuf
