* mlx5 + SRP: max_qp_sz mismatch
@ 2014-08-18 23:20 Mark Lehrer
  0 siblings, 1 reply; 9+ messages in thread
From: Mark Lehrer @ 2014-08-18 23:20 UTC (permalink / raw)
  To: linux-rdma@vger.kernel.org

I have a client machine that is trying to establish an SRP connection,
and it is failing due to an ENOMEM memory allocation error.  I traced
it down to the max_qp_sz field -- mlx5 limits this to 16384, but
the request wants 32768.

I spent some time trying to figure out how this limit is set, but it
isn't quite obvious.  Is there a driver parameter I can set, or a
hard-coded limit somewhere?
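
For reference, a minimal libibverbs sketch to dump the advertised
ceilings (compile with -libverbs; my assumption is that the verbs
max_qp_wr attribute reflects the same cap I'm hitting, so correct me
if that's wrong):

#include <stdio.h>
#include <infiniband/verbs.h>

int main(void)
{
        struct ibv_device **devs;
        int i, n;

        devs = ibv_get_device_list(&n);
        if (!devs)
                return 1;

        for (i = 0; i < n; i++) {
                struct ibv_context *ctx = ibv_open_device(devs[i]);
                struct ibv_device_attr attr;

                if (!ctx)
                        continue;
                /* These are not-to-exceed limits; the usable queue
                 * depth for a given QP shape can be lower. */
                if (!ibv_query_device(ctx, &attr))
                        printf("%s: max_qp_wr=%d max_sge=%d\n",
                               ibv_get_device_name(devs[i]),
                               attr.max_qp_wr, attr.max_sge);
                ibv_close_device(ctx);
        }
        ibv_free_device_list(devs);
        return 0;
}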

I'm using Ubuntu 14.04 and targetcli on the target side, Windows
2008r2 and WinOFED on the client side.

Thanks,
Mark

* Re: mlx5 + SRP: max_qp_sz mismatch
@ 2014-08-19 14:50   ` Sagi Grimberg
  0 siblings, 1 reply; 9+ messages in thread
From: Sagi Grimberg @ 2014-08-19 14:50 UTC (permalink / raw)
  To: Mark Lehrer, linux-rdma@vger.kernel.org

On 8/19/2014 2:20 AM, Mark Lehrer wrote:
> I have a client machine that is trying to establish an SRP connection,
> and it is failing due to an ENOMEM memory allocation error.  I traced
> it down to the max_qp_sz field -- mlx5 limits this to 16384, but
> the request wants 32768.
>
> I spent some time trying to figure out how this limit is set, but it
> isn't quite obvious.  Is there a driver parameter I can set, or a
> hard-coded limit somewhere?
>
> I'm using Ubuntu 14.04 and targetcli on the target side, Windows
> 2008r2 and WinOFED on the client side.
>

Hi Mark,

I think the issue here is that the SRP target asks for srp_sq_size
(default 4096) to allocate room for send WRs, but it also asks for
SRPT_DEF_SG_PER_WQE (16) to allocate room for max_send_sge, which
generally makes the work queue entries bigger since the SGEs are laid
out inline. That probably exceeds the maximum send queue size mlx5
supports...

It is strange that the mlx5 driver is not able to fit the same send
queue lengths as mlx4... Probably its work queue entries are slightly
bigger (I can check that - and I will).
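
Rough math, to be verified against the driver: with 16 SGEs laid out
inline, each send WQE needs a 16-byte control segment plus 16 * 16-byte
data segments, i.e. ~272 bytes; if mlx5 then rounds the WQE stride up
to a power of two, that lands on 512 bytes, or 8 of its 64-byte basic
blocks. 4096 WRs * 8 blocks = 32768 blocks -- exactly the 32768 you
saw requested, against the 16384 the device reports.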

I just wonder how the ULP requesting the space reservations in the
send queue can know that, and whether it should know that at all...

From what I see, srp_sq_size is controlled via configfs. Can you set
it to 2048, just for the sake of confirming this is indeed the issue?
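
(Path from memory, so treat the exact names as an assumption: with the
stock ib_srpt layout the attribute should live per target port under
configfs, something like

  echo 2048 > /sys/kernel/config/target/srpt/$GUID/$TPGT/attrib/srp_sq_size

where $GUID is the ib.<port_guid> directory targetcli created.)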

Thanks,
Sagi.

* Re: mlx5 + SRP: max_qp_sz mismatch
@ 2014-08-19 17:01       ` Mark Lehrer
  0 siblings, 1 reply; 9+ messages in thread
From: Mark Lehrer @ 2014-08-19 17:01 UTC (permalink / raw)
  To: Sagi Grimberg; +Cc: linux-rdma@vger.kernel.org

> From what I see, srp_sq_size is controlled via
> configfs. Can you set it to 2048, just for the sake of
> confirming this is indeed the issue?

Yes!  This setting allowed the two machines to establish an SRP session.

I'll try some I/O tests to see how well it works.

Thanks,
Mark

* Re: mlx5 + SRP: max_qp_sz mismatch
@ 2014-08-20 15:51           ` Mark Lehrer
  0 siblings, 1 reply; 9+ messages in thread
From: Mark Lehrer @ 2014-08-20 15:51 UTC (permalink / raw)
  To: Sagi Grimberg; +Cc: linux-rdma@vger.kernel.org

> From what I see, srp_sq_size is controlled via
> configfs. Can you set it to 2048, just for the sake of
> confirming this is indeed the issue?

Now that we have confirmed that this works, what is the proper way to
fix the problem?  Is the core issue with mlx5 or srp?

Thanks,
Mark

* Re: mlx5 + SRP: max_qp_sz mismatch
@ 2014-08-26 16:10               ` Sagi Grimberg
  0 siblings, 1 reply; 9+ messages in thread
From: Sagi Grimberg @ 2014-08-26 16:10 UTC (permalink / raw)
  To: Mark Lehrer; +Cc: linux-rdma@vger.kernel.org, Bart Van Assche, Eli Cohen

On 8/20/2014 6:51 PM, Mark Lehrer wrote:
>>  From what I see srp_sq_size is controlled via
>> configfs. Can you set it to 2048 just for the sake of
>> confirmation this is indeed the issue?
>
> Now that we have confirmed that this works, what is the proper way to
> fix the problem?  Is the core issue with mlx5 or srp?
>
> Thanks,
> Mark
>

Since I don't know how the true send queue size can be computed from
the device capabilities at the moment, I can suggest a fix to srpt to
retry with srp_sq_size/2 (and so on, until it succeeds...)

CC'ing Bart and Eli.

Sagi.

* Re: mlx5 + SRP: max_qp_sz mismatch
@ 2014-08-27 11:19                   ` Bart Van Assche
  0 siblings, 1 reply; 9+ messages in thread
From: Bart Van Assche @ 2014-08-27 11:19 UTC (permalink / raw)
  To: Sagi Grimberg, Mark Lehrer; +Cc: linux-rdma@vger.kernel.org, Eli Cohen

On 08/26/14 18:10, Sagi Grimberg wrote:
> On 8/20/2014 6:51 PM, Mark Lehrer wrote:
>>> From what I see, srp_sq_size is controlled via
>>> configfs. Can you set it to 2048, just for the sake of
>>> confirming this is indeed the issue?
>>
>> Now that we have confirmed that this works, what is the proper way to
>> fix the problem?  Is the core issue with mlx5 or srp?
> 
> Since I don't know how the true send queue size can be computed from
> the device capabilities at the moment, I can suggest a fix to srpt to
> retry with srp_sq_size/2 (and so on, until it succeeds...)
> 
> CC'ing Bart and Eli.

Sounds fine to me. I wish a more elegant approach were possible, but
I have not yet found one.

Bart.


* RE: mlx5 + SRP: max_qp_sz mismatch
@ 2014-08-27 11:28                       ` Eli Cohen
  0 siblings, 1 reply; 9+ messages in thread
From: Eli Cohen @ 2014-08-27 11:28 UTC (permalink / raw)
  To: Bart Van Assche, Sagi Grimberg, Mark Lehrer
  Cc: linux-rdma@vger.kernel.org

On 08/26/14 18:10, Sagi Grimberg wrote:
> 
> Since I don't know how the true send queue size can be computed from
> the device capabilities at the moment, I can suggest a fix to srpt to
> retry with srp_sq_size/2 (and so on, until it succeeds...)
> 
The device capabilities provide the maximum number of send work
requests that the device supports, but the actual number of work
requests that can be supported in a specific case depends on other
characteristics of the work requests. For example, in the case of
Connect-IB, the actual number depends on the number of s/g entries,
the transport type, etc. This is in compliance with the IB spec:

11.2.1.2 QUERY HCA
Description:
Returns the attributes for the specified HCA.
The maximum values defined in this section are guaranteed
not-to-exceed values. It is possible for an implementation to allocate
some HCA resources from the same space. In that case, the maximum
values returned are not guaranteed for all of those resources
simultaneously.

So, a well-written application should try smaller values if it fails
with ENOMEM.
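
For illustration, the userspace analog of that retry pattern would be
something like the sketch below (against libibverbs; the helper name
is made up, and the kernel verbs flow in srpt would look similar):

#include <errno.h>
#include <stdint.h>
#include <infiniband/verbs.h>

/*
 * Create a QP, halving the requested send queue depth on ENOMEM,
 * since the advertised max_qp_wr is a not-to-exceed value rather
 * than a promise for every WQE shape.  Returns NULL if even
 * min_send_wr cannot be satisfied.
 */
static struct ibv_qp *create_qp_backoff(struct ibv_pd *pd,
                                        struct ibv_qp_init_attr *init,
                                        uint32_t min_send_wr)
{
        for (;;) {
                struct ibv_qp *qp = ibv_create_qp(pd, init);

                if (qp)
                        return qp;
                if (errno != ENOMEM ||
                    init->cap.max_send_wr / 2 < min_send_wr)
                        return NULL;
                init->cap.max_send_wr /= 2;
        }
}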


* Re: mlx5 + SRP: max_qp_sz mismatch
@ 2014-08-28 15:58                           ` Bart Van Assche
  0 siblings, 1 reply; 9+ messages in thread
From: Bart Van Assche @ 2014-08-28 15:58 UTC (permalink / raw)
  To: Mark Lehrer
  Cc: Eli Cohen, Sagi Grimberg, linux-rdma@vger.kernel.org

On 08/27/14 13:28, Eli Cohen wrote:
> [...]
> So, a well-written application should try smaller values if it fails
> with ENOMEM.
 
Hello Mark,

It would help if you could test the patch below. Sorry, but I don't
have access to a Connect-IB setup myself.

Thanks,

Bart.

Reported-by: Mark Lehrer <lehrer-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org>
Signed-off-by: Bart Van Assche <bvanassche-HInyCGIudOg@public.gmane.org>
---
 drivers/infiniband/ulp/srpt/ib_srpt.c | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/drivers/infiniband/ulp/srpt/ib_srpt.c b/drivers/infiniband/ulp/srpt/ib_srpt.c
index fe09f27..3ffaf4e 100644
--- a/drivers/infiniband/ulp/srpt/ib_srpt.c
+++ b/drivers/infiniband/ulp/srpt/ib_srpt.c
@@ -2091,6 +2091,7 @@ static int srpt_create_ch_ib(struct srpt_rdma_ch *ch)
 	if (!qp_init)
 		goto out;
 
+retry:
 	ch->cq = ib_create_cq(sdev->device, srpt_completion, NULL, ch,
 			      ch->rq_size + srp_sq_size, 0);
 	if (IS_ERR(ch->cq)) {
@@ -2114,6 +2115,13 @@ static int srpt_create_ch_ib(struct srpt_rdma_ch *ch)
 	ch->qp = ib_create_qp(sdev->pd, qp_init);
 	if (IS_ERR(ch->qp)) {
 		ret = PTR_ERR(ch->qp);
+		if (ret == -ENOMEM) {
+			srp_sq_size /= 2;
+			if (srp_sq_size >= MIN_SRPT_SQ_SIZE) {
+				ib_destroy_cq(ch->cq);
+				goto retry;
+			}
+		}
 		printk(KERN_ERR "failed to create_qp ret= %d\n", ret);
 		goto err_destroy_cq;
 	}
-- 
1.8.4.5

* Re: mlx5 + SRP: max_qp_sz mismatch
@ 2014-09-02 17:36                               ` Mark Lehrer
  0 siblings, 0 replies; 9+ messages in thread
From: Mark Lehrer @ 2014-09-02 17:36 UTC (permalink / raw)
  To: Bart Van Assche
  Cc: Eli Cohen, Sagi Grimberg, linux-rdma@vger.kernel.org

Just got back from a few days in Denver; I'll give it a try ASAP.

We also have a ton of ConnectX-3's and a few ConnectX-2's.  I'll give
it a quick try on those too, just for fun.  And if anyone ever needs to
test something against one of these (and the test isn't prohibitively
difficult to set up), I would be happy to give it a try.

Thanks,
Mark

end of thread (newest: 2014-09-02 17:36 UTC)

Thread overview: 9+ messages
2014-08-18 23:20 mlx5 + SRP: max_qp_sz mismatch Mark Lehrer
2014-08-19 14:50 ` Sagi Grimberg
2014-08-19 17:01   ` Mark Lehrer
2014-08-20 15:51     ` Mark Lehrer
2014-08-26 16:10       ` Sagi Grimberg
2014-08-27 11:19         ` Bart Van Assche
2014-08-27 11:28           ` Eli Cohen
2014-08-28 15:58             ` Bart Van Assche
2014-09-02 17:36               ` Mark Lehrer
