* [PATCHv1 0/6] evtchn: Improve scalability
@ 2015-06-09 14:59 David Vrabel
  2015-06-09 14:59 ` [PATCHv1 1/6] evtchn: profile event channel lock David Vrabel
                   ` (5 more replies)
  0 siblings, 6 replies; 12+ messages in thread
From: David Vrabel @ 2015-06-09 14:59 UTC
  To: xen-devel
  Cc: Keir Fraser, Tim Deegan, David Vrabel, Jan Beulich, Ian Campbell

The per-domain event channel lock limits scalability when many VCPUs
are trying to send interdomain events.  This series introduces a
per-channel lock, eliminating lock contention when sending an event.
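
In outline, the hot-path locking moves from d->event_lock to a lock
embedded in each struct evtchn.  A minimal sketch of the intended send
path (illustrative only; evtchn_send_sketch() and do_send() are
hypothetical names, the real changes are in patches 4/6 and 5/6):

    static int evtchn_send_sketch(struct domain *ld, unsigned int lport)
    {
        struct evtchn *lchn = evtchn_from_port(ld, lport);
        int ret;

        spin_lock(&lchn->lock);        /* per-channel, not per-domain */
        ret = do_send(ld, lchn);       /* hypothetical send helper */
        spin_unlock(&lchn->lock);

        return ret;
    }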

See this graph for the performance improvements:

  http://xenbits.xen.org/people/dvrabel/evtchn-scalability.png

A different test (using Linux's evtchn device which masks/unmasks
event channels) showed the following lock profile improvements:

Per-domain lock:
(XEN)   lock:    69267976(00000004:19830041), block:    27777407(00000002:3C7C5C96)

Per-event channel lock:
(XEN)   lock:      686530(00000000:076AF5F6), block:        1787(00000000:000B4D22)

Locking removed from evtchn_unmask():
(XEN)   lock:       10769(00000000:00512999), block:          99(00000000:00009491)

David


* [PATCHv1 1/6] evtchn: profile event channel lock
  2015-06-09 14:59 [PATCHv1 0/6] evtchn: Improve scalability David Vrabel
@ 2015-06-09 14:59 ` David Vrabel
  2015-06-09 14:59 ` [PATCHv1 2/6] evtchn: factor out freeing an event channel David Vrabel
                   ` (4 subsequent siblings)
  5 siblings, 0 replies; 12+ messages in thread
From: David Vrabel @ 2015-06-09 14:59 UTC
  To: xen-devel
  Cc: Keir Fraser, Tim Deegan, David Vrabel, Jan Beulich, Ian Campbell

The per-domain event channel lock may suffer from contention.  Add it to
the set of locks to be profiled when lock profiling is enabled.

Signed-off-by: David Vrabel <david.vrabel@citrix.com>
---
 xen/common/event_channel.c |    2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/xen/common/event_channel.c b/xen/common/event_channel.c
index fae242d..bf9b2f8 100644
--- a/xen/common/event_channel.c
+++ b/xen/common/event_channel.c
@@ -1251,7 +1251,7 @@ int evtchn_init(struct domain *d)
     if ( !d->evtchn )
         return -ENOMEM;
 
-    spin_lock_init(&d->event_lock);
+    spin_lock_init_prof(d, event_lock);
     if ( get_free_port(d) != 0 )
     {
         free_evtchn_bucket(d, d->evtchn);
-- 
1.7.10.4


* [PATCHv1 2/6] evtchn: factor out freeing an event channel
  2015-06-09 14:59 [PATCHv1 0/6] evtchn: Improve scalability David Vrabel
  2015-06-09 14:59 ` [PATCHv1 1/6] evtchn: profile event channel lock David Vrabel
@ 2015-06-09 14:59 ` David Vrabel
  2015-06-09 14:59 ` [PATCHv1 3/6] evtchn: simplify port_is_valid() David Vrabel
                   ` (3 subsequent siblings)
  5 siblings, 0 replies; 12+ messages in thread
From: David Vrabel @ 2015-06-09 14:59 UTC
  To: xen-devel
  Cc: Keir Fraser, Tim Deegan, David Vrabel, Jan Beulich, Ian Campbell

We're going to want to free an event channel from two places.  Factor out
the code into a free_evtchn() function.

Signed-off-by: David Vrabel <david.vrabel@citrix.com>
---
 xen/common/event_channel.c |   20 ++++++++++++--------
 1 file changed, 12 insertions(+), 8 deletions(-)

diff --git a/xen/common/event_channel.c b/xen/common/event_channel.c
index bf9b2f8..e4ade17 100644
--- a/xen/common/event_channel.c
+++ b/xen/common/event_channel.c
@@ -192,6 +192,17 @@ static int get_free_port(struct domain *d)
     return port;
 }
 
+static void free_evtchn(struct domain *d, struct evtchn *chn)
+{
+    /* Clear pending event to avoid unexpected behavior on re-bind. */
+    evtchn_port_clear_pending(d, chn);
+
+    /* Reset binding to vcpu0 when the channel is freed. */
+    chn->state          = ECS_FREE;
+    chn->notify_vcpu_id = 0;
+
+    xsm_evtchn_close_post(chn);
+}
 
 static long evtchn_alloc_unbound(evtchn_alloc_unbound_t *alloc)
 {
@@ -569,14 +580,7 @@ static long __evtchn_close(struct domain *d1, int port1)
         BUG();
     }
 
-    /* Clear pending event to avoid unexpected behavior on re-bind. */
-    evtchn_port_clear_pending(d1, chn1);
-
-    /* Reset binding to vcpu0 when the channel is freed. */
-    chn1->state          = ECS_FREE;
-    chn1->notify_vcpu_id = 0;
-
-    xsm_evtchn_close_post(chn1);
+    free_evtchn(d1, chn1);
 
  out:
     if ( d2 != NULL )
-- 
1.7.10.4


* [PATCHv1 3/6] evtchn: simplify port_is_valid()
  2015-06-09 14:59 [PATCHv1 0/6] evtchn: Improve scalability David Vrabel
  2015-06-09 14:59 ` [PATCHv1 1/6] evtchn: profile event channel lock David Vrabel
  2015-06-09 14:59 ` [PATCHv1 2/6] evtchn: factor out freeing an event channel David Vrabel
@ 2015-06-09 14:59 ` David Vrabel
  2015-06-09 15:08   ` Andrew Cooper
  2015-06-09 14:59 ` [PATCHv1 4/6] evtchn: use a per-event channel lock for sending events David Vrabel
                   ` (2 subsequent siblings)
  5 siblings, 1 reply; 12+ messages in thread
From: David Vrabel @ 2015-06-09 14:59 UTC
  To: xen-devel
  Cc: Keir Fraser, Tim Deegan, David Vrabel, Jan Beulich, Ian Campbell

By keeping a count of the number of currently valid event channels,
port_is_valid() can be simplified.

d->valid_evtchns can also be tested without holding d->event_lock which
will be useful later on.

Signed-off-by: David Vrabel <david.vrabel@citrix.com>
---
 xen/common/event_channel.c |    3 +++
 xen/include/xen/event.h    |    4 +---
 xen/include/xen/sched.h    |    5 +++--
 3 files changed, 7 insertions(+), 5 deletions(-)

diff --git a/xen/common/event_channel.c b/xen/common/event_channel.c
index e4ade17..482c3ac 100644
--- a/xen/common/event_channel.c
+++ b/xen/common/event_channel.c
@@ -189,6 +189,8 @@ static int get_free_port(struct domain *d)
         return -ENOMEM;
     bucket_from_port(d, port) = chn;
 
+    atomic_add(EVTCHNS_PER_BUCKET, &d->valid_evtchns);
+
     return port;
 }
 
@@ -1254,6 +1256,7 @@ int evtchn_init(struct domain *d)
     d->evtchn = alloc_evtchn_bucket(d, 0);
     if ( !d->evtchn )
         return -ENOMEM;
+    atomic_set(&d->valid_evtchns, EVTCHNS_PER_BUCKET);
 
     spin_lock_init_prof(d, event_lock);
     if ( get_free_port(d) != 0 )
diff --git a/xen/include/xen/event.h b/xen/include/xen/event.h
index 690f865..77b4d31 100644
--- a/xen/include/xen/event.h
+++ b/xen/include/xen/event.h
@@ -91,9 +91,7 @@ static inline bool_t port_is_valid(struct domain *d, unsigned int p)
         return 0;
     if ( !d->evtchn )
         return 0;
-    if ( p < EVTCHNS_PER_BUCKET )
-        return 1;
-    return group_from_port(d, p) != NULL && bucket_from_port(d, p) != NULL;
+    return p < atomic_read(&d->valid_evtchns);
 }
 
 static inline struct evtchn *evtchn_from_port(struct domain *d, unsigned int p)
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index 80c6f62..fab2e08 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -336,8 +336,9 @@ struct domain
     /* Event channel information. */
     struct evtchn   *evtchn;                         /* first bucket only */
     struct evtchn  **evtchn_group[NR_EVTCHN_GROUPS]; /* all other buckets */
-    unsigned int     max_evtchns;
-    unsigned int     max_evtchn_port;
+    unsigned int     max_evtchns;     /* number supported by ABI */
+    unsigned int     max_evtchn_port; /* max permitted port number */
+    atomic_t         valid_evtchns;   /* number of allocated event channels */
     spinlock_t       event_lock;
     const struct evtchn_port_ops *evtchn_port_ops;
     struct evtchn_fifo_domain *evtchn_fifo;
-- 
1.7.10.4


* [PATCHv1 4/6] evtchn: use a per-event channel lock for sending events
  2015-06-09 14:59 [PATCHv1 0/6] evtchn: Improve scalability David Vrabel
                   ` (2 preceding siblings ...)
  2015-06-09 14:59 ` [PATCHv1 3/6] evtchn: simplify port_is_valid() David Vrabel
@ 2015-06-09 14:59 ` David Vrabel
  2015-06-09 15:17   ` Andrew Cooper
  2015-06-09 14:59 ` [PATCHv1 5/6] evtchn: remove the locking when unmasking an event channel David Vrabel
  2015-06-09 14:59 ` [PATCHv1 6/6] evtchn: pad struct evtchn to 64 bytes David Vrabel
  5 siblings, 1 reply; 12+ messages in thread
From: David Vrabel @ 2015-06-09 14:59 UTC
  To: xen-devel
  Cc: Keir Fraser, Tim Deegan, David Vrabel, Jan Beulich, Ian Campbell

When sending an event, use a new per-event channel lock to safely
validate the event channel state.

This new lock must be held when changing event channel state.

To avoid having to take the remote event channel locks when sending to
an interdomain event channel, the local and remote channel locks are
both held when binding or closing an interdomain event channel.

This significantly increases the number of events that can be sent
from multiple VCPUs.

But, struct evtchn increases in size, reducing the number that fit
into a single page to 64 (instead of 128).

Signed-off-by: David Vrabel <david.vrabel@citrix.com>
---
 xen/common/event_channel.c |   84 +++++++++++++++++++++++++++++++++++++-------
 xen/include/xen/sched.h    |    1 +
 2 files changed, 73 insertions(+), 12 deletions(-)

diff --git a/xen/common/event_channel.c b/xen/common/event_channel.c
index 482c3ac..71747d1 100644
--- a/xen/common/event_channel.c
+++ b/xen/common/event_channel.c
@@ -139,6 +139,7 @@ static struct evtchn *alloc_evtchn_bucket(struct domain *d, unsigned int port)
             return NULL;
         }
         chn[i].port = port + i;
+        spin_lock_init(&chn[i].lock);
     }
     return chn;
 }
@@ -228,11 +229,15 @@ static long evtchn_alloc_unbound(evtchn_alloc_unbound_t *alloc)
     if ( rc )
         goto out;
 
+    spin_lock(&chn->lock);
+
     chn->state = ECS_UNBOUND;
     if ( (chn->u.unbound.remote_domid = alloc->remote_dom) == DOMID_SELF )
         chn->u.unbound.remote_domid = current->domain->domain_id;
     evtchn_port_init(d, chn);
 
+    spin_unlock(&chn->lock);
+
     alloc->port = port;
 
  out:
@@ -243,6 +248,30 @@ static long evtchn_alloc_unbound(evtchn_alloc_unbound_t *alloc)
 }
 
 
+static void double_evtchn_lock(struct domain *ld, struct evtchn *lchn,
+                               struct domain *rd, struct evtchn *rchn)
+{
+    if ( ld < rd || (ld == rd && lchn->port < rchn->port) )
+    {
+        spin_lock(&lchn->lock);
+        spin_lock(&rchn->lock);
+    }
+    else
+    {
+        if ( ld != rd || lchn->port != rchn->port )
+            spin_lock(&rchn->lock);
+        spin_lock(&lchn->lock);
+    }
+}
+
+static void double_evtchn_unlock(struct domain *ld, struct evtchn *lchn,
+                                 struct domain *rd, struct evtchn *rchn)
+{
+    spin_unlock(&lchn->lock);
+    if ( ld != rd || lchn->port != rchn->port )
+        spin_unlock(&rchn->lock);
+}
+
 static long evtchn_bind_interdomain(evtchn_bind_interdomain_t *bind)
 {
     struct evtchn *lchn, *rchn;
@@ -285,6 +314,8 @@ static long evtchn_bind_interdomain(evtchn_bind_interdomain_t *bind)
     if ( rc )
         goto out;
 
+    double_evtchn_lock(ld, lchn, rd, rchn);
+
     lchn->u.interdomain.remote_dom  = rd;
     lchn->u.interdomain.remote_port = rport;
     lchn->state                     = ECS_INTERDOMAIN;
@@ -300,6 +331,8 @@ static long evtchn_bind_interdomain(evtchn_bind_interdomain_t *bind)
      */
     evtchn_port_set_pending(ld, lchn->notify_vcpu_id, lchn);
 
+    double_evtchn_unlock(ld, lchn, rd, rchn);
+
     bind->local_port = lport;
 
  out:
@@ -340,11 +373,16 @@ static long evtchn_bind_virq(evtchn_bind_virq_t *bind)
         ERROR_EXIT(port);
 
     chn = evtchn_from_port(d, port);
+
+    spin_lock(&chn->lock);
+
     chn->state          = ECS_VIRQ;
     chn->notify_vcpu_id = vcpu;
     chn->u.virq         = virq;
     evtchn_port_init(d, chn);
 
+    spin_unlock(&chn->lock);
+
     v->virq_to_evtchn[virq] = bind->port = port;
 
  out:
@@ -371,10 +409,15 @@ static long evtchn_bind_ipi(evtchn_bind_ipi_t *bind)
         ERROR_EXIT(port);
 
     chn = evtchn_from_port(d, port);
+
+    spin_lock(&chn->lock);
+
     chn->state          = ECS_IPI;
     chn->notify_vcpu_id = vcpu;
     evtchn_port_init(d, chn);
 
+    spin_unlock(&chn->lock);
+
     bind->port = port;
 
  out:
@@ -449,11 +492,15 @@ static long evtchn_bind_pirq(evtchn_bind_pirq_t *bind)
         goto out;
     }
 
+    spin_lock(&chn->lock);
+
     chn->state  = ECS_PIRQ;
     chn->u.pirq.irq = pirq;
     link_pirq_port(port, chn, v);
     evtchn_port_init(d, chn);
 
+    spin_unlock(&chn->lock);
+
     bind->port = port;
 
 #ifdef CONFIG_X86
@@ -467,7 +514,6 @@ static long evtchn_bind_pirq(evtchn_bind_pirq_t *bind)
     return rc;
 }
 
-
 static long __evtchn_close(struct domain *d1, int port1)
 {
     struct domain *d2 = NULL;
@@ -574,15 +620,24 @@ static long __evtchn_close(struct domain *d1, int port1)
         BUG_ON(chn2->state != ECS_INTERDOMAIN);
         BUG_ON(chn2->u.interdomain.remote_dom != d1);
 
+        double_evtchn_lock(d1, chn1, d2, chn2);
+
+        free_evtchn(d1, chn1);
+
         chn2->state = ECS_UNBOUND;
         chn2->u.unbound.remote_domid = d1->domain_id;
-        break;
+
+        double_evtchn_unlock(d1, chn1, d2, chn2);
+
+        goto out;
 
     default:
         BUG();
     }
 
-    free_evtchn(d1, chn1);
+    spin_lock(&chn1->lock);
+    free_evtchn(d1, chn1);
+    spin_unlock(&chn1->lock);
 
  out:
     if ( d2 != NULL )
@@ -609,21 +664,18 @@ int evtchn_send(struct domain *ld, unsigned int lport)
     struct domain *rd;
     int            rport, ret = 0;
 
-    spin_lock(&ld->event_lock);
-
-    if ( unlikely(!port_is_valid(ld, lport)) )
-    {
-        spin_unlock(&ld->event_lock);
+    if ( unlikely(lport >= atomic_read(&ld->valid_evtchns)) )
         return -EINVAL;
-    }
 
     lchn = evtchn_from_port(ld, lport);
 
+    spin_lock(&lchn->lock);
+
     /* Guest cannot send via a Xen-attached event channel. */
     if ( unlikely(consumer_is_xen(lchn)) )
     {
-        spin_unlock(&ld->event_lock);
-        return -EINVAL;
+        ret = -EINVAL;
+        goto out;
     }
 
     ret = xsm_evtchn_send(XSM_HOOK, ld, lchn);
@@ -652,7 +704,7 @@ int evtchn_send(struct domain *ld, unsigned int lport)
     }
 
 out:
-    spin_unlock(&ld->event_lock);
+    spin_unlock(&lchn->lock);
 
     return ret;
 }
@@ -1163,11 +1215,15 @@ int alloc_unbound_xen_event_channel(
     if ( rc )
         goto out;
 
+    spin_lock(&chn->lock);
+
     chn->state = ECS_UNBOUND;
     chn->xen_consumer = get_xen_consumer(notification_fn);
     chn->notify_vcpu_id = lvcpu;
     chn->u.unbound.remote_domid = remote_domid;
 
+    spin_unlock(&chn->lock);
+
  out:
     spin_unlock(&ld->event_lock);
 
@@ -1214,6 +1270,8 @@ void notify_via_xen_event_channel(struct domain *ld, int lport)
     lchn = evtchn_from_port(ld, lport);
     ASSERT(consumer_is_xen(lchn));
 
+    spin_lock(&lchn->lock);
+
     if ( likely(lchn->state == ECS_INTERDOMAIN) )
     {
         rd    = lchn->u.interdomain.remote_dom;
@@ -1221,6 +1279,8 @@ void notify_via_xen_event_channel(struct domain *ld, int lport)
         evtchn_port_set_pending(rd, rchn->notify_vcpu_id, rchn);
     }
 
+    spin_unlock(&lchn->lock);
+
     spin_unlock(&ld->event_lock);
 }
 
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index fab2e08..292e28f 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -79,6 +79,7 @@ extern domid_t hardware_domid;
 
 struct evtchn
 {
+    spinlock_t lock;
 #define ECS_FREE         0 /* Channel is available for use.                  */
 #define ECS_RESERVED     1 /* Channel is reserved.                           */
 #define ECS_UNBOUND      2 /* Channel is waiting to bind to a remote domain. */
-- 
1.7.10.4


* [PATCHv1 5/6] evtchn: remove the locking when unmasking an event channel
  2015-06-09 14:59 [PATCHv1 0/6] evtchn: Improve scalability David Vrabel
                   ` (3 preceding siblings ...)
  2015-06-09 14:59 ` [PATCHv1 4/6] evtchn: use a per-event channel lock for sending events David Vrabel
@ 2015-06-09 14:59 ` David Vrabel
  2015-06-09 14:59 ` [PATCHv1 6/6] evtchn: pad struct evtchn to 64 bytes David Vrabel
  5 siblings, 0 replies; 12+ messages in thread
From: David Vrabel @ 2015-06-09 14:59 UTC
  To: xen-devel
  Cc: Keir Fraser, Tim Deegan, David Vrabel, Jan Beulich, Ian Campbell

The event channel lock is no longer required to check if the port is
valid.

Signed-off-by: David Vrabel <david.vrabel@citrix.com>
---
 xen/common/event_channel.c |    6 +-----
 1 file changed, 1 insertion(+), 5 deletions(-)

diff --git a/xen/common/event_channel.c b/xen/common/event_channel.c
index 71747d1..7ab0516 100644
--- a/xen/common/event_channel.c
+++ b/xen/common/event_channel.c
@@ -979,9 +979,7 @@ int evtchn_unmask(unsigned int port)
     struct domain *d = current->domain;
     struct evtchn *evtchn;
 
-    ASSERT(spin_is_locked(&d->event_lock));
-
-    if ( unlikely(!port_is_valid(d, port)) )
+    if ( port >= atomic_read(&d->valid_evtchns) )
         return -EINVAL;
 
     evtchn = evtchn_from_port(d, port);
@@ -1147,9 +1145,7 @@ long do_event_channel_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
         struct evtchn_unmask unmask;
         if ( copy_from_guest(&unmask, arg, 1) != 0 )
             return -EFAULT;
-        spin_lock(&current->domain->event_lock);
         rc = evtchn_unmask(unmask.port);
-        spin_unlock(&current->domain->event_lock);
         break;
     }
 
-- 
1.7.10.4


* [PATCHv1 6/6] evtchn: pad struct evtchn to 64 bytes
  2015-06-09 14:59 [PATCHv1 0/6] evtchn: Improve scalability David Vrabel
                   ` (4 preceding siblings ...)
  2015-06-09 14:59 ` [PATCHv1 5/6] evtchn: remove the locking when unmasking an event channel David Vrabel
@ 2015-06-09 14:59 ` David Vrabel
  2015-06-10  9:04   ` Jan Beulich
  5 siblings, 1 reply; 12+ messages in thread
From: David Vrabel @ 2015-06-09 14:59 UTC
  To: xen-devel
  Cc: Keir Fraser, Tim Deegan, David Vrabel, Jan Beulich, Ian Campbell

The number of struct evtchn in a page must be a power of two.  Under
some workloads performance is improved slightly by padding struct
evtchn to 64 bytes (a cache line), thus putting the per-channel locks
into their own cache line.

This does not decrease the number of struct evtchns per page.
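
The power-of-two requirement could also be asserted at build time; a
minimal sketch (illustrative only, not part of this patch, and the
placement in evtchn_init() is an assumption):

    /* sizeof(struct evtchn) must be a power of two so that a bucket
     * of them exactly fills a page. */
    BUILD_BUG_ON(sizeof(struct evtchn) & (sizeof(struct evtchn) - 1));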

Signed-off-by: David Vrabel <david.vrabel@citrix.com>
---
I'm not sure we actually want to do this.  I think it would be better
to pack the struct evtchn and use vmap to turn the pages into a linear
array for quicker lookup.
---
 xen/include/xen/sched.h |    5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index 292e28f..0749c43 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -110,8 +110,8 @@ struct evtchn
     u8 priority;
     u8 last_priority;
     u16 last_vcpu_id;
-#ifdef XSM_ENABLE
     union {
+#ifdef XSM_ENABLE
 #ifdef XSM_NEED_GENERIC_EVTCHN_SSID
         /*
          * If an XSM module needs more space for its event channel context,
@@ -127,8 +127,9 @@ struct evtchn
          */
         u32 flask_sid;
 #endif
-    } ssid;
 #endif
+        u8 __pad[24]; /* Round size to power of two. */
+    } ssid;
 };
 
 int  evtchn_init(struct domain *d); /* from domain_create */
-- 
1.7.10.4


* Re: [PATCHv1 3/6] evtchn: simplify port_is_valid()
  2015-06-09 14:59 ` [PATCHv1 3/6] evtchn: simplify port_is_valid() David Vrabel
@ 2015-06-09 15:08   ` Andrew Cooper
  2015-06-09 15:16     ` David Vrabel
  0 siblings, 1 reply; 12+ messages in thread
From: Andrew Cooper @ 2015-06-09 15:08 UTC
  To: David Vrabel, xen-devel
  Cc: Tim Deegan, Keir Fraser, Ian Campbell, Jan Beulich

On 09/06/15 15:59, David Vrabel wrote:
> By keeping a count of the number of currently valid event channels,
> port_is_valid() can be simplified.
>
> d->valid_evtchns can also be tested without holding d->event_lock which
> will be useful later on.
>
> Signed-off-by: David Vrabel <david.vrabel@citrix.com>
> ---
>  xen/common/event_channel.c |    3 +++
>  xen/include/xen/event.h    |    4 +---
>  xen/include/xen/sched.h    |    5 +++--
>  3 files changed, 7 insertions(+), 5 deletions(-)
>
> diff --git a/xen/common/event_channel.c b/xen/common/event_channel.c
> index e4ade17..482c3ac 100644
> --- a/xen/common/event_channel.c
> +++ b/xen/common/event_channel.c
> @@ -189,6 +189,8 @@ static int get_free_port(struct domain *d)
>          return -ENOMEM;
>      bucket_from_port(d, port) = chn;
>  
> +    atomic_add(EVTCHNS_PER_BUCKET, &d->valid_evtchns);
> +
>      return port;
>  }
>  
> @@ -1254,6 +1256,7 @@ int evtchn_init(struct domain *d)
>      d->evtchn = alloc_evtchn_bucket(d, 0);
>      if ( !d->evtchn )
>          return -ENOMEM;
> +    atomic_set(&d->valid_evtchns, EVTCHNS_PER_BUCKET);
>  
>      spin_lock_init_prof(d, event_lock);
>      if ( get_free_port(d) != 0 )
> diff --git a/xen/include/xen/event.h b/xen/include/xen/event.h
> index 690f865..77b4d31 100644
> --- a/xen/include/xen/event.h
> +++ b/xen/include/xen/event.h
> @@ -91,9 +91,7 @@ static inline bool_t port_is_valid(struct domain *d, unsigned int p)
>          return 0;
>      if ( !d->evtchn )
>          return 0;
> -    if ( p < EVTCHNS_PER_BUCKET )
> -        return 1;
> -    return group_from_port(d, p) != NULL && bucket_from_port(d, p) != NULL;
> +    return p < atomic_read(&d->valid_evtchns);
>  }
>  
>  static inline struct evtchn *evtchn_from_port(struct domain *d, unsigned int p)
> diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
> index 80c6f62..fab2e08 100644
> --- a/xen/include/xen/sched.h
> +++ b/xen/include/xen/sched.h
> @@ -336,8 +336,9 @@ struct domain
>      /* Event channel information. */
>      struct evtchn   *evtchn;                         /* first bucket only */
>      struct evtchn  **evtchn_group[NR_EVTCHN_GROUPS]; /* all other buckets */
> -    unsigned int     max_evtchns;
> -    unsigned int     max_evtchn_port;
> +    unsigned int     max_evtchns;     /* number supported by ABI */
> +    unsigned int     max_evtchn_port; /* max permitted port number */
> +    atomic_t         valid_evtchns;   /* number of allocated event channels */

atomic_t contains a signed integer.  You probably want a BUILD_BUG_ON()
if any ABI maximum value exceeds INT_MAX.
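
For instance (a sketch only; the constant chosen and the availability
of INT_MAX in this context are assumptions, not something posted in
this thread):

    /* No supported ABI may expose more ports than the signed value
     * inside atomic_t can represent. */
    BUILD_BUG_ON(EVTCHN_FIFO_NR_CHANNELS > INT_MAX);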

~Andrew

>      spinlock_t       event_lock;
>      const struct evtchn_port_ops *evtchn_port_ops;
>      struct evtchn_fifo_domain *evtchn_fifo;


* Re: [PATCHv1 3/6] evtchn: simplify port_is_valid()
  2015-06-09 15:08   ` Andrew Cooper
@ 2015-06-09 15:16     ` David Vrabel
  0 siblings, 0 replies; 12+ messages in thread
From: David Vrabel @ 2015-06-09 15:16 UTC
  To: Andrew Cooper, David Vrabel, xen-devel
  Cc: Keir Fraser, Tim Deegan, Ian Campbell, Jan Beulich

On 09/06/15 16:08, Andrew Cooper wrote:
> On 09/06/15 15:59, David Vrabel wrote:
>> By keeping a count of the number of currently valid event channels,
>> port_is_valid() can be simplified.
>>
>> d->valid_evtchns can also be tested without holding d->event_lock which
>> will be useful later on.
[...]
>> --- a/xen/include/xen/sched.h
>> +++ b/xen/include/xen/sched.h
>> @@ -336,8 +336,9 @@ struct domain
>>      /* Event channel information. */
>>      struct evtchn   *evtchn;                         /* first bucket only */
>>      struct evtchn  **evtchn_group[NR_EVTCHN_GROUPS]; /* all other buckets */
>> -    unsigned int     max_evtchns;
>> -    unsigned int     max_evtchn_port;
>> +    unsigned int     max_evtchns;     /* number supported by ABI */
>> +    unsigned int     max_evtchn_port; /* max permitted port number */
>> +    atomic_t         valid_evtchns;   /* number of allocated event channels */
> 
> atomic_t contains a signed integer.  You probably want a BUILD_BUG_ON()
> if any ABI maximum value exceeds INT_MAX.

Probably better to use unsigned int and read_atomic().  All the updates
to valid_evtchns are done while holding d->event_lock.
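
A sketch of that alternative (assumed code, not a posted patch; the
write_atomic() counterpart on the writer side is an assumption):

    /* writer: only ever called with d->event_lock held */
    write_atomic(&d->valid_evtchns,
                 d->valid_evtchns + EVTCHNS_PER_BUCKET);

    /* lockless reader, e.g. in port_is_valid() */
    return p < read_atomic(&d->valid_evtchns);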

David

ps. please trim replies.


* Re: [PATCHv1 4/6] evtchn: use a per-event channel lock for sending events
  2015-06-09 14:59 ` [PATCHv1 4/6] evtchn: use a per-event channel lock for sending events David Vrabel
@ 2015-06-09 15:17   ` Andrew Cooper
  2015-06-10  8:59     ` Jan Beulich
  0 siblings, 1 reply; 12+ messages in thread
From: Andrew Cooper @ 2015-06-09 15:17 UTC
  To: David Vrabel, xen-devel
  Cc: Tim Deegan, Keir Fraser, Ian Campbell, Jan Beulich

On 09/06/15 15:59, David Vrabel wrote:
> When sending an event, use a new per-event channel lock to safely
> validate the event channel state.
>
> This new lock must be held when changing event channel state.
>
> To avoid having to take the remote event channel locks when sending to
> an interdomain event channel, the local and remote channel locks are
> both held when binding or closing an interdomain event channel.
>
> This significantly increases the number of events that can be sent
> from multiple VCPUs.
>
> But, struct evtchn increases in size, reducing the number that fit
> into a single page to 64 (instead of 128).
>
> Signed-off-by: David Vrabel <david.vrabel@citrix.com>
> ---
>  xen/common/event_channel.c |   84 +++++++++++++++++++++++++++++++++++++-------
>  xen/include/xen/sched.h    |    1 +
>  2 files changed, 73 insertions(+), 12 deletions(-)
>
> diff --git a/xen/common/event_channel.c b/xen/common/event_channel.c
> index 482c3ac..71747d1 100644
> --- a/xen/common/event_channel.c
> +++ b/xen/common/event_channel.c
> @@ -139,6 +139,7 @@ static struct evtchn *alloc_evtchn_bucket(struct domain *d, unsigned int port)
>              return NULL;
>          }
>          chn[i].port = port + i;
> +        spin_lock_init(&chn[i].lock);
>      }
>      return chn;
>  }
> @@ -228,11 +229,15 @@ static long evtchn_alloc_unbound(evtchn_alloc_unbound_t *alloc)
>      if ( rc )
>          goto out;
>  
> +    spin_lock(&chn->lock);
> +
>      chn->state = ECS_UNBOUND;
>      if ( (chn->u.unbound.remote_domid = alloc->remote_dom) == DOMID_SELF )
>          chn->u.unbound.remote_domid = current->domain->domain_id;
>      evtchn_port_init(d, chn);
>  
> +    spin_unlock(&chn->lock);
> +
>      alloc->port = port;
>  
>   out:
> @@ -243,6 +248,30 @@ static long evtchn_alloc_unbound(evtchn_alloc_unbound_t *alloc)
>  }
>  
>  
> +static void double_evtchn_lock(struct domain *ld, struct evtchn *lchn,
> +                               struct domain *rd, struct evtchn *rchn)
> +{
> +    if ( ld < rd || (ld == rd && lchn->port < rchn->port) )

ld < rd is undefined behaviour as they are not part of the same
singly-allocated object.  It would be better to choose order based on
domid, and looks like the grant code suffers the same issue.
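
A domid-based ordering might look like this (sketch only, not posted
code):

    static void double_evtchn_lock(struct domain *ld, struct evtchn *lchn,
                                   struct domain *rd, struct evtchn *rchn)
    {
        if ( ld->domain_id < rd->domain_id ||
             (ld == rd && lchn->port < rchn->port) )
        {
            spin_lock(&lchn->lock);
            spin_lock(&rchn->lock);
        }
        else
        {
            /* If lchn and rchn are the same channel, lock it only once. */
            if ( ld != rd || lchn->port != rchn->port )
                spin_lock(&rchn->lock);
            spin_lock(&lchn->lock);
        }
    }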

~Andrew


* Re: [PATCHv1 4/6] evtchn: use a per-event channel lock for sending events
  2015-06-09 15:17   ` Andrew Cooper
@ 2015-06-10  8:59     ` Jan Beulich
  0 siblings, 0 replies; 12+ messages in thread
From: Jan Beulich @ 2015-06-10  8:59 UTC
  To: Andrew Cooper, David Vrabel
  Cc: xen-devel, Keir Fraser, Ian Campbell, Tim Deegan

>>> On 09.06.15 at 17:17, <andrew.cooper3@citrix.com> wrote:
> On 09/06/15 15:59, David Vrabel wrote:
>> @@ -243,6 +248,30 @@ static long evtchn_alloc_unbound(evtchn_alloc_unbound_t *alloc)
>>  }
>>  
>>  
>> +static void double_evtchn_lock(struct domain *ld, struct evtchn *lchn,
>> +                               struct domain *rd, struct evtchn *rchn)
>> +{
>> +    if ( ld < rd || (ld == rd && lchn->port < rchn->port) )
> 
> ld < rd is undefined behaviour as they are not part of the same
> singly-allocated object.  It would be better to choose order based on
> domid, and looks like the grant code suffers the same issue.

I'm not concerned about undefined behavior here at all (nor in the
gnttab code): struct domain pointers are all page aligned, and hence
can be viewed as elements of a page-sized array spanning the entire
Xen heap.  What I'd recommend is also making the channel comparison
use the pointers instead of the port numbers, which then could
likely reduce this to comparing only those two pointers.  Or, if
not, I'd really like to at least see the struct domain pointers
here get const qualified.
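
Comparing the two struct evtchn pointers directly could then reduce
this to something like (a sketch of that suggestion, not posted code):

    static void double_evtchn_lock(struct evtchn *lchn, struct evtchn *rchn)
    {
        if ( lchn <= rchn )
        {
            spin_lock(&lchn->lock);
            if ( lchn != rchn )      /* same channel: lock it only once */
                spin_lock(&rchn->lock);
        }
        else
        {
            spin_lock(&rchn->lock);
            spin_lock(&lchn->lock);
        }
    }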

Jan


* Re: [PATCHv1 6/6] evtchn: pad struct evtchn to 64 bytes
  2015-06-09 14:59 ` [PATCHv1 6/6] evtchn: pad struct evtchn to 64 bytes David Vrabel
@ 2015-06-10  9:04   ` Jan Beulich
  0 siblings, 0 replies; 12+ messages in thread
From: Jan Beulich @ 2015-06-10  9:04 UTC
  To: David Vrabel; +Cc: xen-devel, Keir Fraser, Ian Campbell, Tim Deegan

>>> On 09.06.15 at 16:59, <david.vrabel@citrix.com> wrote:
> --- a/xen/include/xen/sched.h
> +++ b/xen/include/xen/sched.h
> @@ -110,8 +110,8 @@ struct evtchn
>      u8 priority;
>      u8 last_priority;
>      u16 last_vcpu_id;
> -#ifdef XSM_ENABLE
>      union {
> +#ifdef XSM_ENABLE
>  #ifdef XSM_NEED_GENERIC_EVTCHN_SSID
>          /*
>           * If an XSM module needs more space for its event channel context,
> @@ -127,8 +127,9 @@ struct evtchn
>           */
>          u32 flask_sid;
>  #endif
> -    } ssid;
>  #endif
> +        u8 __pad[24]; /* Round size to power of two. */

With the union in the middle differing in size between 32- and 64-bit
architectures, can this really be a uniform value?  Wouldn't it be
better to simply mark the whole structure aligned(64)?
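
A sketch of that alternative (assumed code, not a posted patch),
letting the compiler pad to a cache line instead of sizing a pad
array by hand:

    struct evtchn
    {
        spinlock_t lock;
        /* ... existing fields unchanged ... */
    } __attribute__((__aligned__(64)));  /* one cache line per channel */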

Also I don't see the need for an underscore here, not to speak of
even two of them.

Jan

