From: Michael Ellerman <mpe@ellerman.id.au>
To: Will Deacon <will.deacon@arm.com>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>,
	"linux-arch@vger.kernel.org" <linux-arch@vger.kernel.org>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	Paul McKenney <paulmck@linux.vnet.ibm.com>,
	Peter Zijlstra <peterz@infradead.org>
Subject: Re: [RFC PATCH v2] memory-barriers: remove smp_mb__after_unlock_lock()
Date: Thu, 16 Jul 2015 12:00:28 +1000	[thread overview]
Message-ID: <1437012028.28475.2.camel@ellerman.id.au> (raw)
In-Reply-To: <20150715104420.GD1005@arm.com>

On Wed, 2015-07-15 at 11:44 +0100, Will Deacon wrote:
> Hi Michael,
> 
> On Wed, Jul 15, 2015 at 04:06:18AM +0100, Michael Ellerman wrote:
> > On Tue, 2015-07-14 at 08:31 +1000, Benjamin Herrenschmidt wrote:
> > > On Mon, 2015-07-13 at 13:15 +0100, Will Deacon wrote:
> > > > This didn't go anywhere last time I posted it, but here it is again.
> > > > I'd really appreciate some feedback from the PowerPC guys, especially as
> > > > to whether this change requires them to add an additional barrier in
> > > > arch_spin_unlock and what the cost of that would be.
> > > 
> > > We'd have to turn the lwsync in unlock or the isync in lock into a full
> > > barrier. As it is, we *almost* have a full barrier semantic, but not
> > > quite, as in things can get mixed up inside spin_lock between the LL and
> > > the SC (things leaking in past LL and things leaking "out" up before SC
> > > and then getting mixed up in there).
> > > 
> > > Michael, at some point you were experimenting a bit with that and tried
> > > to get some perf numbers of the impact that would have, did that
> > > solidify ? Otherwise, I'll have a look when I'm back next week.
> > 
> > I was mainly experimenting with replacing the lwsync in lock with an isync.
> > 
> > But I think you're talking about making it a full sync in lock.
> > 
> > That was about +7% on p8, +25% on p7 and +88% on p6.
> 
> Ok, so that's overhead incurred by moving from isync -> lwsync? The numbers
> look quite large...

Sorry, no.

Currently we use lwsync in lock. You'll see isync in the code
(PPC_ACQUIRE_BARRIER), but on most modern CPUs that is patched at runtime to
lwsync.

I benchmarked lwsync (current), isync (proposed at the time) and sync (just for
comparison).

The numbers above are going from lwsync -> sync, as I thought that was what Ben
was talking about.

Going from lwsync to isync was actually a small speedup on power8, which was
surprising.

> > We got stuck deciding whether isync was safe to use as a memory barrier,
> > because the wording in the arch is a bit vague.
> > 
> > But if we're talking about a full sync then I think there is no question that's
> > OK and we should just do it.
> 
> Is this because there's a small overhead from lwsync -> sync? Just want to
> make sure I follow your reasoning.

No, I mean that sync is clearly a memory barrier. The issue with switching to
isync in lock was that it's not a memory barrier per se, so we were not 100%
confident in it.

> If you went the way of sync in unlock, could you remove the conditional
> SYNC_IO stuff?

Yeah, we could; it's just a conditional sync in unlock when mmio has been done.

That would fix the problem with smp_mb__after_unlock_lock(), but not the
original worry we had about loads happening before the SC in lock.

cheers



