All the mail mirrored from lore.kernel.org
* [PATCH net-next] virtio_net: enable napi_tx by default
@ 2019-06-13 16:24 Willem de Bruijn
  2019-06-14  3:28 ` Jason Wang
                   ` (2 more replies)
  0 siblings, 3 replies; 6+ messages in thread
From: Willem de Bruijn @ 2019-06-13 16:24 UTC (permalink / raw)
  To: netdev; +Cc: davem, jasowang, mst, Willem de Bruijn

From: Willem de Bruijn <willemb@google.com>

NAPI tx mode improves TCP behavior by enabling TCP small queues (TSQ).
TSQ reduces queuing ("bufferbloat") and burstiness.

Previous measurements have shown significant improvement for
TCP_STREAM style workloads, such as those in commit 86a5df1495cc
("Merge branch 'virtio-net-tx-napi'").

There has been uncertainty about smaller possible regressions in
latency due to increased reliance on tx interrupts.

The above results did not show that, nor did I observe this when
rerunning TCP_RR on Linux 5.1 this week on a pair of guests in the
same rack. This may be subject to other settings, notably interrupt
coalescing.

In the unlikely case of regression, we have landed a credible runtime
solution. Ethtool can configure it with -C tx-frames [0|1] as of
commit 0c465be183c7 ("virtio_net: ethtool tx napi configuration").
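The runtime toggle mentioned above can be exercised as follows. This is a
sketch: the interface name eth0 is an assumption, and per commit
0c465be183c7 the tx-frames value selects the mode (1 for NAPI tx, 0 for
the legacy tx completion path):

```shell
# Show the current coalescing settings, including tx-frames
# (substitute the guest's actual virtio_net interface for eth0).
ethtool -c eth0

# Select NAPI tx mode at runtime; use "tx-frames 0" to fall back
# to the legacy tx completion path if a regression is observed.
ethtool -C eth0 tx-frames 1
```

On kernels that predate the commit above, ethtool will report that the
coalescing operation is not supported.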

NAPI tx mode has been the default in Google Container-Optimized OS
(COS) for over half a year, as of release M70 in October 2018,
without any negative reports.

Link: https://marc.info/?l=linux-netdev&m=149305618416472
Link: https://lwn.net/Articles/507065/
Signed-off-by: Willem de Bruijn <willemb@google.com>

---

now that we have ethtool support and real production deployment,
it seemed like a good time to revisit this discussion.

---
 drivers/net/virtio_net.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index 0d4115c9e20b..4f3de0ac8b0b 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -26,7 +26,7 @@
 static int napi_weight = NAPI_POLL_WEIGHT;
 module_param(napi_weight, int, 0444);
 
-static bool csum = true, gso = true, napi_tx;
+static bool csum = true, gso = true, napi_tx = true;
 module_param(csum, bool, 0444);
 module_param(gso, bool, 0444);
 module_param(napi_tx, bool, 0644);
-- 
2.22.0.rc2.383.gf4fbbf30c2-goog
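Because napi_tx is a module parameter (registered with mode 0644 in the
hunk above), the new default can also be overridden at module load time,
or inspected through sysfs. A sketch, assuming the standard sysfs layout:

```shell
# Load virtio_net with NAPI tx mode explicitly disabled, overriding
# the new default. For a built-in driver, the equivalent is passing
# virtio_net.napi_tx=0 on the kernel command line.
modprobe virtio_net napi_tx=0

# Mode 0644 exposes the parameter read-write in sysfs; bool module
# parameters read back as Y or N.
cat /sys/module/virtio_net/parameters/napi_tx
```

Note that the sysfs file only changes the value consulted at device probe
time; for an already-running device, the ethtool -C interface described
above is the reliable runtime toggle.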



* Re: [PATCH net-next] virtio_net: enable napi_tx by default
  2019-06-13 16:24 [PATCH net-next] virtio_net: enable napi_tx by default Willem de Bruijn
@ 2019-06-14  3:28 ` Jason Wang
  2019-06-14  6:00   ` Michael S. Tsirkin
  2019-06-14  5:35 ` Michael S. Tsirkin
  2019-06-15  2:34 ` David Miller
  2 siblings, 1 reply; 6+ messages in thread
From: Jason Wang @ 2019-06-14  3:28 UTC (permalink / raw)
  To: Willem de Bruijn, netdev; +Cc: davem, mst, Willem de Bruijn


On 2019/6/14 12:24 AM, Willem de Bruijn wrote:
> From: Willem de Bruijn <willemb@google.com>
>
> NAPI tx mode improves TCP behavior by enabling TCP small queues (TSQ).
> TSQ reduces queuing ("bufferbloat") and burstiness.
>
> Previous measurements have shown significant improvement for
> TCP_STREAM style workloads, such as those in commit 86a5df1495cc
> ("Merge branch 'virtio-net-tx-napi'").
>
> There has been uncertainty about smaller possible regressions in
> latency due to increased reliance on tx interrupts.
>
> The above results did not show that, nor did I observe this when
> rerunning TCP_RR on Linux 5.1 this week on a pair of guests in the
> same rack. This may be subject to other settings, notably interrupt
> coalescing.
>
> In the unlikely case of regression, we have landed a credible runtime
> solution. Ethtool can configure it with -C tx-frames [0|1] as of
> commit 0c465be183c7 ("virtio_net: ethtool tx napi configuration").
>
> NAPI tx mode has been the default in Google Container-Optimized OS
> (COS) for over half a year, as of release M70 in October 2018,
> without any negative reports.
>
> Link: https://marc.info/?l=linux-netdev&m=149305618416472
> Link: https://lwn.net/Articles/507065/
> Signed-off-by: Willem de Bruijn <willemb@google.com>
>
> ---
>
> now that we have ethtool support and real production deployment,
> it seemed like a good time to revisit this discussion.


I agree with enabling it by default; we need input from Michael. One 
possible issue is that we may see some regression on older machines without 
APICv, but considering that most modern CPUs have this feature, it probably 
doesn't matter.

Thanks


>
> ---
>   drivers/net/virtio_net.c | 2 +-
>   1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
> index 0d4115c9e20b..4f3de0ac8b0b 100644
> --- a/drivers/net/virtio_net.c
> +++ b/drivers/net/virtio_net.c
> @@ -26,7 +26,7 @@
>   static int napi_weight = NAPI_POLL_WEIGHT;
>   module_param(napi_weight, int, 0444);
>   
> -static bool csum = true, gso = true, napi_tx;
> +static bool csum = true, gso = true, napi_tx = true;
>   module_param(csum, bool, 0444);
>   module_param(gso, bool, 0444);
>   module_param(napi_tx, bool, 0644);


* Re: [PATCH net-next] virtio_net: enable napi_tx by default
  2019-06-13 16:24 [PATCH net-next] virtio_net: enable napi_tx by default Willem de Bruijn
  2019-06-14  3:28 ` Jason Wang
@ 2019-06-14  5:35 ` Michael S. Tsirkin
  2019-06-15  2:34 ` David Miller
  2 siblings, 0 replies; 6+ messages in thread
From: Michael S. Tsirkin @ 2019-06-14  5:35 UTC (permalink / raw)
  To: Willem de Bruijn; +Cc: netdev, davem, jasowang, Willem de Bruijn

On Thu, Jun 13, 2019 at 12:24:57PM -0400, Willem de Bruijn wrote:
> From: Willem de Bruijn <willemb@google.com>
> 
> NAPI tx mode improves TCP behavior by enabling TCP small queues (TSQ).
> TSQ reduces queuing ("bufferbloat") and burstiness.
> 
> Previous measurements have shown significant improvement for
> TCP_STREAM style workloads, such as those in commit 86a5df1495cc
> ("Merge branch 'virtio-net-tx-napi'").
> 
> There has been uncertainty about smaller possible regressions in
> latency due to increased reliance on tx interrupts.
> 
> The above results did not show that, nor did I observe this when
> rerunning TCP_RR on Linux 5.1 this week on a pair of guests in the
> same rack. This may be subject to other settings, notably interrupt
> coalescing.
> 
> In the unlikely case of regression, we have landed a credible runtime
> solution. Ethtool can configure it with -C tx-frames [0|1] as of
> commit 0c465be183c7 ("virtio_net: ethtool tx napi configuration").
> 
> NAPI tx mode has been the default in Google Container-Optimized OS
> (COS) for over half a year, as of release M70 in October 2018,
> without any negative reports.
> 
> Link: https://marc.info/?l=linux-netdev&m=149305618416472
> Link: https://lwn.net/Articles/507065/
> Signed-off-by: Willem de Bruijn <willemb@google.com>


Acked-by: Michael S. Tsirkin <mst@redhat.com>

> ---
> 
> now that we have ethtool support and real production deployment,
> it seemed like a good time to revisit this discussion.
> 
> ---
>  drivers/net/virtio_net.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
> index 0d4115c9e20b..4f3de0ac8b0b 100644
> --- a/drivers/net/virtio_net.c
> +++ b/drivers/net/virtio_net.c
> @@ -26,7 +26,7 @@
>  static int napi_weight = NAPI_POLL_WEIGHT;
>  module_param(napi_weight, int, 0444);
>  
> -static bool csum = true, gso = true, napi_tx;
> +static bool csum = true, gso = true, napi_tx = true;
>  module_param(csum, bool, 0444);
>  module_param(gso, bool, 0444);
>  module_param(napi_tx, bool, 0644);
> -- 
> 2.22.0.rc2.383.gf4fbbf30c2-goog


* Re: [PATCH net-next] virtio_net: enable napi_tx by default
  2019-06-14  3:28 ` Jason Wang
@ 2019-06-14  6:00   ` Michael S. Tsirkin
  2019-06-14  7:01     ` Jason Wang
  0 siblings, 1 reply; 6+ messages in thread
From: Michael S. Tsirkin @ 2019-06-14  6:00 UTC (permalink / raw)
  To: Jason Wang; +Cc: Willem de Bruijn, netdev, davem, Willem de Bruijn

On Fri, Jun 14, 2019 at 11:28:59AM +0800, Jason Wang wrote:
> 
> On 2019/6/14 12:24 AM, Willem de Bruijn wrote:
> > From: Willem de Bruijn <willemb@google.com>
> > 
> > NAPI tx mode improves TCP behavior by enabling TCP small queues (TSQ).
> > TSQ reduces queuing ("bufferbloat") and burstiness.
> > 
> > Previous measurements have shown significant improvement for
> > TCP_STREAM style workloads, such as those in commit 86a5df1495cc
> > ("Merge branch 'virtio-net-tx-napi'").
> > 
> > There has been uncertainty about smaller possible regressions in
> > latency due to increased reliance on tx interrupts.
> > 
> > The above results did not show that, nor did I observe this when
> > rerunning TCP_RR on Linux 5.1 this week on a pair of guests in the
> > same rack. This may be subject to other settings, notably interrupt
> > coalescing.
> > 
> > In the unlikely case of regression, we have landed a credible runtime
> > solution. Ethtool can configure it with -C tx-frames [0|1] as of
> > commit 0c465be183c7 ("virtio_net: ethtool tx napi configuration").
> > 
> > NAPI tx mode has been the default in Google Container-Optimized OS
> > (COS) for over half a year, as of release M70 in October 2018,
> > without any negative reports.
> > 
> > Link: https://marc.info/?l=linux-netdev&m=149305618416472
> > Link: https://lwn.net/Articles/507065/
> > Signed-off-by: Willem de Bruijn <willemb@google.com>
> > 
> > ---
> > 
> > now that we have ethtool support and real production deployment,
> > it seemed like a good time to revisit this discussion.
> 
> 
> I agree with enabling it by default; we need input from Michael. One possible
> issue is that we may see some regression on older machines without APICv, but
> considering that most modern CPUs have this feature, it probably doesn't matter.
> 
> Thanks
> 

Right. If the issue does arise we can always add e.g. a feature flag
to control the default from the host.


> > 
> > ---
> >   drivers/net/virtio_net.c | 2 +-
> >   1 file changed, 1 insertion(+), 1 deletion(-)
> > 
> > diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
> > index 0d4115c9e20b..4f3de0ac8b0b 100644
> > --- a/drivers/net/virtio_net.c
> > +++ b/drivers/net/virtio_net.c
> > @@ -26,7 +26,7 @@
> >   static int napi_weight = NAPI_POLL_WEIGHT;
> >   module_param(napi_weight, int, 0444);
> > -static bool csum = true, gso = true, napi_tx;
> > +static bool csum = true, gso = true, napi_tx = true;
> >   module_param(csum, bool, 0444);
> >   module_param(gso, bool, 0444);
> >   module_param(napi_tx, bool, 0644);


* Re: [PATCH net-next] virtio_net: enable napi_tx by default
  2019-06-14  6:00   ` Michael S. Tsirkin
@ 2019-06-14  7:01     ` Jason Wang
  0 siblings, 0 replies; 6+ messages in thread
From: Jason Wang @ 2019-06-14  7:01 UTC (permalink / raw)
  To: Michael S. Tsirkin; +Cc: Willem de Bruijn, netdev, davem, Willem de Bruijn


On 2019/6/14 2:00 PM, Michael S. Tsirkin wrote:
> On Fri, Jun 14, 2019 at 11:28:59AM +0800, Jason Wang wrote:
>> On 2019/6/14 12:24 AM, Willem de Bruijn wrote:
>>> From: Willem de Bruijn <willemb@google.com>
>>>
>>> NAPI tx mode improves TCP behavior by enabling TCP small queues (TSQ).
>>> TSQ reduces queuing ("bufferbloat") and burstiness.
>>>
>>> Previous measurements have shown significant improvement for
>>> TCP_STREAM style workloads, such as those in commit 86a5df1495cc
>>> ("Merge branch 'virtio-net-tx-napi'").
>>>
>>> There has been uncertainty about smaller possible regressions in
>>> latency due to increased reliance on tx interrupts.
>>>
>>> The above results did not show that, nor did I observe this when
>>> rerunning TCP_RR on Linux 5.1 this week on a pair of guests in the
>>> same rack. This may be subject to other settings, notably interrupt
>>> coalescing.
>>>
>>> In the unlikely case of regression, we have landed a credible runtime
>>> solution. Ethtool can configure it with -C tx-frames [0|1] as of
>>> commit 0c465be183c7 ("virtio_net: ethtool tx napi configuration").
>>>
>>> NAPI tx mode has been the default in Google Container-Optimized OS
>>> (COS) for over half a year, as of release M70 in October 2018,
>>> without any negative reports.
>>>
>>> Link: https://marc.info/?l=linux-netdev&m=149305618416472
>>> Link: https://lwn.net/Articles/507065/
>>> Signed-off-by: Willem de Bruijn <willemb@google.com>
>>>
>>> ---
>>>
>>> now that we have ethtool support and real production deployment,
>>> it seemed like a good time to revisit this discussion.
>>
>> I agree with enabling it by default; we need input from Michael. One possible
>> issue is that we may see some regression on older machines without APICv, but
>> considering that most modern CPUs have this feature, it probably doesn't matter.
>>
>> Thanks
>>
> Right. If the issue does arise we can always add e.g. a feature flag
> to control the default from the host.
>

Yes.

So

Acked-by: Jason Wang <jasowang@redhat.com>




* Re: [PATCH net-next] virtio_net: enable napi_tx by default
  2019-06-13 16:24 [PATCH net-next] virtio_net: enable napi_tx by default Willem de Bruijn
  2019-06-14  3:28 ` Jason Wang
  2019-06-14  5:35 ` Michael S. Tsirkin
@ 2019-06-15  2:34 ` David Miller
  2 siblings, 0 replies; 6+ messages in thread
From: David Miller @ 2019-06-15  2:34 UTC (permalink / raw)
  To: willemdebruijn.kernel; +Cc: netdev, jasowang, mst, willemb

From: Willem de Bruijn <willemdebruijn.kernel@gmail.com>
Date: Thu, 13 Jun 2019 12:24:57 -0400

> From: Willem de Bruijn <willemb@google.com>
> 
> NAPI tx mode improves TCP behavior by enabling TCP small queues (TSQ).
> TSQ reduces queuing ("bufferbloat") and burstiness.
> 
> Previous measurements have shown significant improvement for
> TCP_STREAM style workloads, such as those in commit 86a5df1495cc
> ("Merge branch 'virtio-net-tx-napi'").
> 
> There has been uncertainty about smaller possible regressions in
> latency due to increased reliance on tx interrupts.
> 
> The above results did not show that, nor did I observe this when
> rerunning TCP_RR on Linux 5.1 this week on a pair of guests in the
> same rack. This may be subject to other settings, notably interrupt
> coalescing.
> 
> In the unlikely case of regression, we have landed a credible runtime
> solution. Ethtool can configure it with -C tx-frames [0|1] as of
> commit 0c465be183c7 ("virtio_net: ethtool tx napi configuration").
> 
> NAPI tx mode has been the default in Google Container-Optimized OS
> (COS) for over half a year, as of release M70 in October 2018,
> without any negative reports.
> 
> Link: https://marc.info/?l=linux-netdev&m=149305618416472
> Link: https://lwn.net/Articles/507065/
> Signed-off-by: Willem de Bruijn <willemb@google.com>

Applied.

