All the mail mirrored from lore.kernel.org
* [PATCH] vmxnet3: prevent receive getting out of sequence on napi poll
@ 2015-07-07 18:02 Neil Horman
  2015-07-07 18:10 ` Andy Gospodarek
  2015-07-09  6:37 ` David Miller
  0 siblings, 2 replies; 4+ messages in thread
From: Neil Horman @ 2015-07-07 18:02 UTC
  To: netdev; +Cc: Neil Horman, Shreyas Bhatewara, David S. Miller

vmxnet3's current napi path is built to count every rx descriptor we receive,
and use that as a count against the napi budget.  That means it's possible to
return from a napi poll halfway through receiving a fragmented packet spread
across multiple dma descriptors.  If that happens, the next napi poll will
start with the descriptor ring in an improper state (e.g. the first descriptor
we look at may have the end-of-packet bit set), which will cause a BUG halt in
the driver.

Fix the issue by only counting whole received packets in the napi poll and
returning that value, rather than the descriptor count.

Tested successfully by the reporter and myself.

Signed-off-by: Neil Horman <nhorman@tuxdriver.com>
CC: Shreyas Bhatewara <sbhatewara@vmware.com>
CC: "David S. Miller" <davem@davemloft.net>
---
 drivers/net/vmxnet3/vmxnet3_drv.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/drivers/net/vmxnet3/vmxnet3_drv.c b/drivers/net/vmxnet3/vmxnet3_drv.c
index da11bb5..46f4cad 100644
--- a/drivers/net/vmxnet3/vmxnet3_drv.c
+++ b/drivers/net/vmxnet3/vmxnet3_drv.c
@@ -1216,7 +1216,7 @@ vmxnet3_rq_rx_complete(struct vmxnet3_rx_queue *rq,
 	static const u32 rxprod_reg[2] = {
 		VMXNET3_REG_RXPROD, VMXNET3_REG_RXPROD2
 	};
-	u32 num_rxd = 0;
+	u32 num_pkts = 0;
 	bool skip_page_frags = false;
 	struct Vmxnet3_RxCompDesc *rcd;
 	struct vmxnet3_rx_ctx *ctx = &rq->rx_ctx;
@@ -1235,13 +1235,12 @@ vmxnet3_rq_rx_complete(struct vmxnet3_rx_queue *rq,
 		struct Vmxnet3_RxDesc *rxd;
 		u32 idx, ring_idx;
 		struct vmxnet3_cmd_ring	*ring = NULL;
-		if (num_rxd >= quota) {
+		if (num_pkts >= quota) {
 			/* we may stop even before we see the EOP desc of
 			 * the current pkt
 			 */
 			break;
 		}
-		num_rxd++;
 		BUG_ON(rcd->rqID != rq->qid && rcd->rqID != rq->qid2);
 		idx = rcd->rxdIdx;
 		ring_idx = rcd->rqID < adapter->num_rx_queues ? 0 : 1;
@@ -1413,6 +1412,7 @@ not_lro:
 				napi_gro_receive(&rq->napi, skb);
 
 			ctx->skb = NULL;
+			num_pkts++;
 		}
 
 rcd_done:
@@ -1443,7 +1443,7 @@ rcd_done:
 				  &rq->comp_ring.base[rq->comp_ring.next2proc].rcd, &rxComp);
 	}
 
-	return num_rxd;
+	return num_pkts;
 }
 
 
-- 
2.1.0


* Re: [PATCH] vmxnet3: prevent receive getting out of sequence on napi poll
  2015-07-07 18:02 [PATCH] vmxnet3: prevent receive getting out of sequence on napi poll Neil Horman
@ 2015-07-07 18:10 ` Andy Gospodarek
  2015-07-07 18:26   ` Neil Horman
  2015-07-09  6:37 ` David Miller
  1 sibling, 1 reply; 4+ messages in thread
From: Andy Gospodarek @ 2015-07-07 18:10 UTC
  To: Neil Horman; +Cc: netdev, Shreyas Bhatewara, David S. Miller

On Tue, Jul 07, 2015 at 02:02:18PM -0400, Neil Horman wrote:
> vmxnet3's current napi path is built to count every rx descriptor we receive,
> and use that as a count against the napi budget.  That means it's possible to
> return from a napi poll halfway through receiving a fragmented packet spread
> across multiple dma descriptors.  If that happens, the next napi poll will
> start with the descriptor ring in an improper state (e.g. the first descriptor
> we look at may have the end-of-packet bit set), which will cause a BUG halt in
> the driver.
> 
> Fix the issue by only counting whole received packets in the napi poll and
> returning that value, rather than the descriptor count.
> 
> Tested successfully by the reporter and myself.
> 
> Signed-off-by: Neil Horman <nhorman@tuxdriver.com>
> CC: Shreyas Bhatewara <sbhatewara@vmware.com>
> CC: "David S. Miller" <davem@davemloft.net>

Looks good.  I'm now curious how widespread something like this might be
for drivers that use a similar EOP marker....

Acked-by: Andy Gospodarek <gospo@cumulusnetworks.com>


* Re: [PATCH] vmxnet3: prevent receive getting out of sequence on napi poll
  2015-07-07 18:10 ` Andy Gospodarek
@ 2015-07-07 18:26   ` Neil Horman
  0 siblings, 0 replies; 4+ messages in thread
From: Neil Horman @ 2015-07-07 18:26 UTC
  To: Andy Gospodarek; +Cc: netdev, Shreyas Bhatewara, David S. Miller

On Tue, Jul 07, 2015 at 02:10:50PM -0400, Andy Gospodarek wrote:
> On Tue, Jul 07, 2015 at 02:02:18PM -0400, Neil Horman wrote:
> > vmxnet3's current napi path is built to count every rx descriptor we receive,
> > and use that as a count against the napi budget.  That means it's possible to
> > return from a napi poll halfway through receiving a fragmented packet spread
> > across multiple dma descriptors.  If that happens, the next napi poll will
> > start with the descriptor ring in an improper state (e.g. the first descriptor
> > we look at may have the end-of-packet bit set), which will cause a BUG halt in
> > the driver.
> > 
> > Fix the issue by only counting whole received packets in the napi poll and
> > returning that value, rather than the descriptor count.
> > 
> > Tested successfully by the reporter and myself.
> > 
> > Signed-off-by: Neil Horman <nhorman@tuxdriver.com>
> > CC: Shreyas Bhatewara <sbhatewara@vmware.com>
> > CC: "David S. Miller" <davem@davemloft.net>
> 
> Looks good.  I'm now curious how widespread something like this might be
> for drivers that use a similar EOP marker....
> 
That's a fair question, though it manifests pretty clearly in any driver that
does any sort of strict state checking.  I think several drivers just punt if
they see an EOP descriptor before an SOP descriptor, toss it and keep going, so
you might lose a few extra frames while the driver drains the queue.


> Acked-by: Andy Gospodarek <gospo@cumulusnetworks.com>
> 


* Re: [PATCH] vmxnet3: prevent receive getting out of sequence on napi poll
  2015-07-07 18:02 [PATCH] vmxnet3: prevent receive getting out of sequence on napi poll Neil Horman
  2015-07-07 18:10 ` Andy Gospodarek
@ 2015-07-09  6:37 ` David Miller
  1 sibling, 0 replies; 4+ messages in thread
From: David Miller @ 2015-07-09  6:37 UTC
  To: nhorman; +Cc: netdev, sbhatewara

From: Neil Horman <nhorman@tuxdriver.com>
Date: Tue,  7 Jul 2015 14:02:18 -0400

> vmxnet3's current napi path is built to count every rx descriptor we receive,
> and use that as a count against the napi budget.  That means it's possible to
> return from a napi poll halfway through receiving a fragmented packet spread
> across multiple dma descriptors.  If that happens, the next napi poll will
> start with the descriptor ring in an improper state (e.g. the first descriptor
> we look at may have the end-of-packet bit set), which will cause a BUG halt in
> the driver.
> 
> Fix the issue by only counting whole received packets in the napi poll and
> returning that value, rather than the descriptor count.
> 
> Tested successfully by the reporter and myself.
> 
> Signed-off-by: Neil Horman <nhorman@tuxdriver.com>

Applied, thanks Neil.

