From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
To: Jan Beulich <JBeulich@suse.com>
Cc: xen-devel <xen-devel@lists.xenproject.org>,
	qemu-devel@nongnu.org,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: Re: [Qemu-devel] [PATCH] xen/HVM: atomically access pointers in bufioreq handling
Date: Tue, 21 Jul 2015 14:54:45 +0100	[thread overview]
Message-ID: <alpine.DEB.2.02.1507201723370.17378@kaball.uk.xensource.com> (raw)
In-Reply-To: <5582E14F0200007800086A37@mail.emea.novell.com>

On Thu, 18 Jun 2015, Jan Beulich wrote:
> The number of slots per page being 511 (i.e. not a power of two) means
> that the (32-bit) read and write indexes going beyond 2^32 will likely
> disturb operation. The hypervisor side gets I/O req server creation
> extended so we can indicate that we're using suitable atomic accesses
> where needed (not all accesses to the two pointers really need to be
> atomic), allowing it to atomically canonicalize both pointers when both
> have gone through at least one cycle.

The description is a bit terse: which accesses don't really need to be
atomic?


> The Xen side counterpart (which is not a functional prereq to this
> change, albeit a build one) can be found at e.g.
> http://lists.xenproject.org/archives/html/xen-devel/2015-06/msg02996.html 
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> 
> --- a/xen-hvm.c
> +++ b/xen-hvm.c
> @@ -981,19 +981,30 @@ static void handle_ioreq(XenIOState *sta
>  
>  static int handle_buffered_iopage(XenIOState *state)
>  {
> +    buffered_iopage_t *buf_page = state->buffered_io_page;
>      buf_ioreq_t *buf_req = NULL;
>      ioreq_t req;
>      int qw;
>  
> -    if (!state->buffered_io_page) {
> +    if (!buf_page) {
>          return 0;
>      }
>  
>      memset(&req, 0x00, sizeof(req));
>  
> -    while (state->buffered_io_page->read_pointer != state->buffered_io_page->write_pointer) {
> -        buf_req = &state->buffered_io_page->buf_ioreq[
> -            state->buffered_io_page->read_pointer % IOREQ_BUFFER_SLOT_NUM];
> +    for (;;) {
> +        uint32_t rdptr = buf_page->read_pointer, wrptr;
> +
> +        xen_rmb();

We don't need this barrier.


> +        wrptr = buf_page->write_pointer;
> +        xen_rmb();
> +        if (rdptr != buf_page->read_pointer) {

I think you have to use atomic_read to make sure that the second read of
buf_page->read_pointer is up to date and not optimized away.

That said, I think it would be best to simply use atomic_read to fetch
both pointers at once as a single uint64_t, so you are guaranteed a
consistent view and there is no need for this check.


> +            continue;
> +        }
> +        if (rdptr == wrptr) {
> +            break;
> +        }
> +        buf_req = &buf_page->buf_ioreq[rdptr % IOREQ_BUFFER_SLOT_NUM];
>          req.size = 1UL << buf_req->size;
>          req.count = 1;
>          req.addr = buf_req->addr;
> @@ -1005,15 +1016,14 @@ static int handle_buffered_iopage(XenIOS
>          req.data_is_ptr = 0;
>          qw = (req.size == 8);
>          if (qw) {
> -            buf_req = &state->buffered_io_page->buf_ioreq[
> -                (state->buffered_io_page->read_pointer + 1) % IOREQ_BUFFER_SLOT_NUM];
> +            buf_req = &buf_page->buf_ioreq[(rdptr + 1) %
> +                                           IOREQ_BUFFER_SLOT_NUM];
>              req.data |= ((uint64_t)buf_req->data) << 32;
>          }
>  
>          handle_ioreq(state, &req);
>  
> -        xen_mb();
> -        state->buffered_io_page->read_pointer += qw ? 2 : 1;
> +        atomic_add(&buf_page->read_pointer, qw + 1);

I couldn't find specific information on the type of barrier implemented
by __sync_fetch_and_add, so I cannot tell for sure whether removing
xen_mb() is appropriate. Do you have a link? I suspect that, given the
strong ordering guarantees of the x86 architecture, we'll be fine. I
would be less confident if this code were run on other archs.


>      }
>  
>      return req.count;
> --- a/include/hw/xen/xen_common.h
> +++ b/include/hw/xen/xen_common.h
> @@ -370,7 +370,8 @@ static inline void xen_unmap_pcidev(XenX
>  static inline int xen_create_ioreq_server(XenXC xc, domid_t dom,
>                                            ioservid_t *ioservid)
>  {
> -    int rc = xc_hvm_create_ioreq_server(xc, dom, 1, ioservid);
> +    int rc = xc_hvm_create_ioreq_server(xc, dom, HVM_IOREQSRV_BUFIOREQ_ATOMIC,
> +                                        ioservid);

I am concerned that passing 2 instead of 1 could break older
hypervisors. However, handle_bufioreq was never defined as a true
boolean, so maybe it is OK?

The alternative would be to create a xen_xc_hvm_create_ioreq_server
versioned wrapper in include/hw/xen/xen_common.h.



>      if (rc == 0) {
>          trace_xen_ioreq_server_create(*ioservid);
