From: "Jan Beulich" <JBeulich@suse.com>
To: qemu-devel@nongnu.org
Cc: xen-devel <xen-devel@lists.xenproject.org>,
Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: [PATCH] xen/HVM: atomically access pointers in bufioreq handling
Date: Thu, 18 Jun 2015 14:18:39 +0100
Message-ID: <5582E14F0200007800086A37__34084.985117852$1434633638$gmane$org@mail.emea.novell.com>
The number of slots per page being 511 (i.e. not a power of two) means
that the (32-bit) read and write indexes wrapping around at 2^32 will
likely disturb operation. The hypervisor side gets I/O req server creation
extended so we can indicate that we're using suitable atomic accesses
where needed (not all accesses to the two pointers really need to be
atomic), allowing it to atomically canonicalize both pointers when both
have gone through at least one cycle.
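To see the breakage concretely: since 512 is congruent to 1 (mod 511),
2^32 mod 511 == 32, so the slot computed as index % IOREQ_BUFFER_SLOT_NUM
does not advance by one across the 32-bit wrap but jumps instead. A
minimal stand-alone illustration (not part of the patch, assuming only
IOREQ_BUFFER_SLOT_NUM == 511):

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

#define IOREQ_BUFFER_SLOT_NUM 511

int main(void)
{
    uint32_t idx = UINT32_MAX;          /* last index value before the wrap */

    /* (2^32 - 1) % 511 == 31 */
    printf("index %" PRIu32 " -> slot %" PRIu32 "\n",
           idx, idx % IOREQ_BUFFER_SLOT_NUM);
    ++idx;                              /* wraps to 0 */
    /* slot 0 instead of the expected slot 32 */
    printf("index %" PRIu32 " -> slot %" PRIu32 "\n",
           idx, idx % IOREQ_BUFFER_SLOT_NUM);
    return 0;
}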
The Xen side counterpart (which is not a functional prerequisite for this
change, albeit a build-time one) can be found at e.g.
http://lists.xenproject.org/archives/html/xen-devel/2015-06/msg02996.html
Signed-off-by: Jan Beulich <jbeulich@suse.com>
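For reference, the canonicalization mentioned above can be pictured
roughly as follows. This is only a sketch under assumptions - the union
layout, the helper name, and the GCC __atomic builtins are illustrative
and are not the code of the linked Xen-side patch:

#include <stdbool.h>
#include <stdint.h>

#define IOREQ_BUFFER_SLOT_NUM 511

/* Assumed layout: the two 32-bit pointers sit next to each other, so
 * they can be updated as one 64-bit value. */
union bufioreq_pointers {
    struct {
        uint32_t read_pointer;
        uint32_t write_pointer;
    };
    uint64_t full;
};

/* Subtract one full cycle from both pointers with a single 64-bit CAS.
 * Slot numbers (ptr % IOREQ_BUFFER_SLOT_NUM) are unchanged by this, so
 * the consumer's atomic read_pointer updates remain valid, while the
 * pointers can no longer approach the 2^32 wrap. */
static void canonicalize_pointers(union bufioreq_pointers *ptrs)
{
    union bufioreq_pointers cur, canon;

    do {
        cur.full = __atomic_load_n(&ptrs->full, __ATOMIC_RELAXED);
        if (cur.read_pointer < IOREQ_BUFFER_SLOT_NUM) {
            return;                 /* not yet through a full cycle */
        }
        canon.read_pointer = cur.read_pointer - IOREQ_BUFFER_SLOT_NUM;
        canon.write_pointer = cur.write_pointer - IOREQ_BUFFER_SLOT_NUM;
    } while (!__atomic_compare_exchange_n(&ptrs->full, &cur.full, canon.full,
                                          false, __ATOMIC_SEQ_CST,
                                          __ATOMIC_SEQ_CST));
}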
--- a/xen-hvm.c
+++ b/xen-hvm.c
@@ -981,19 +981,30 @@ static void handle_ioreq(XenIOState *sta
static int handle_buffered_iopage(XenIOState *state)
{
+ buffered_iopage_t *buf_page = state->buffered_io_page;
buf_ioreq_t *buf_req = NULL;
ioreq_t req;
int qw;
- if (!state->buffered_io_page) {
+ if (!buf_page) {
return 0;
}
memset(&req, 0x00, sizeof(req));
- while (state->buffered_io_page->read_pointer != state->buffered_io_page->write_pointer) {
- buf_req = &state->buffered_io_page->buf_ioreq[
- state->buffered_io_page->read_pointer % IOREQ_BUFFER_SLOT_NUM];
+ for (;;) {
+ uint32_t rdptr = buf_page->read_pointer, wrptr;
+
+ xen_rmb();
+ wrptr = buf_page->write_pointer;
+ xen_rmb();
+ if (rdptr != buf_page->read_pointer) {
+ continue;
+ }
+ if (rdptr == wrptr) {
+ break;
+ }
+ buf_req = &buf_page->buf_ioreq[rdptr % IOREQ_BUFFER_SLOT_NUM];
req.size = 1UL << buf_req->size;
req.count = 1;
req.addr = buf_req->addr;
@@ -1005,15 +1016,14 @@ static int handle_buffered_iopage(XenIOS
req.data_is_ptr = 0;
qw = (req.size == 8);
if (qw) {
- buf_req = &state->buffered_io_page->buf_ioreq[
- (state->buffered_io_page->read_pointer + 1) % IOREQ_BUFFER_SLOT_NUM];
+ buf_req = &buf_page->buf_ioreq[(rdptr + 1) %
+ IOREQ_BUFFER_SLOT_NUM];
req.data |= ((uint64_t)buf_req->data) << 32;
}
handle_ioreq(state, &req);
- xen_mb();
- state->buffered_io_page->read_pointer += qw ? 2 : 1;
+ atomic_add(&buf_page->read_pointer, qw + 1);
}
return req.count;
--- a/include/hw/xen/xen_common.h
+++ b/include/hw/xen/xen_common.h
@@ -370,7 +370,8 @@ static inline void xen_unmap_pcidev(XenX
static inline int xen_create_ioreq_server(XenXC xc, domid_t dom,
ioservid_t *ioservid)
{
- int rc = xc_hvm_create_ioreq_server(xc, dom, 1, ioservid);
+ int rc = xc_hvm_create_ioreq_server(xc, dom, HVM_IOREQSRV_BUFIOREQ_ATOMIC,
+ ioservid);
if (rc == 0) {
trace_xen_ioreq_server_create(*ioservid);
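Two remarks on the hunks above:

- The re-read of read_pointer after the two barriers in
  handle_buffered_iopage() guards against Xen canonicalizing both
  pointers between the reads of rdptr and wrptr, which would otherwise
  leave the locally held pair inconsistent.

- The literal 1 previously passed to xc_hvm_create_ioreq_server()
  corresponds to the legacy (non-atomic) buffered ioreq mode. With the
  Xen-side counterpart applied, the public interface is expected to
  provide roughly the following constants (comments paraphrased, shown
  here only for context):

  #define HVM_IOREQSRV_BUFIOREQ_OFF    0  /* no buffered ioreq page */
  #define HVM_IOREQSRV_BUFIOREQ_LEGACY 1  /* non-atomic pointer updates */
  #define HVM_IOREQSRV_BUFIOREQ_ATOMIC 2  /* emulator updates the read
                                             pointer atomically */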