From: Jason Xing <kerneljasonxing@gmail.com>
To: Daniel Jurgens <danielj@nvidia.com>
Cc: netdev@vger.kernel.org, mst@redhat.com, jasowang@redhat.com,
xuanzhuo@linux.alibaba.com, virtualization@lists.linux.dev,
davem@davemloft.net, edumazet@google.com, kuba@kernel.org,
pabeni@redhat.com, jiri@nvidia.com
Subject: Re: [PATCH net-next v2 2/2] virtio_net: Add TX stopped and wake counters
Date: Sat, 11 May 2024 11:38:18 +0800
Message-ID: <CAL+tcoBRqEyr+4NPDiORJgM4nUzYjeHNZJLDo2K=dAJhKWvz_g@mail.gmail.com>
In-Reply-To: <20240510201927.1821109-3-danielj@nvidia.com>
On Sat, May 11, 2024 at 4:20 AM Daniel Jurgens <danielj@nvidia.com> wrote:
>
> Add tx queue stop and wake counters; they are useful for debugging.
>
> $ ./tools/net/ynl/cli.py --spec netlink/specs/netdev.yaml \
> --dump qstats-get --json '{"scope": "queue"}'
> ...
> {'ifindex': 13,
> 'queue-id': 0,
> 'queue-type': 'tx',
> 'tx-bytes': 14756682850,
> 'tx-packets': 226465,
> 'tx-stop': 113208,
> 'tx-wake': 113208},
> {'ifindex': 13,
> 'queue-id': 1,
> 'queue-type': 'tx',
> 'tx-bytes': 18167675008,
> 'tx-packets': 278660,
> 'tx-stop': 8632,
> 'tx-wake': 8632}]
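Side note, not part of the patch: once these counters land, the dump above can be post-processed to spot queues that are stuck in the stopped state. A rough sketch (field names taken from the output above; the unavoidable race between sampling tx-stop and tx-wake is ignored here):

```python
def stalled_tx_queues(qstats):
    """Given the list dumped by cli.py (--dump qstats-get, scope "queue"),
    return (ifindex, queue-id) pairs whose tx queue was stopped more
    times than it was woken, i.e. is likely stopped right now."""
    return [
        (q["ifindex"], q["queue-id"])
        for q in qstats
        if q.get("queue-type") == "tx"
        and q.get("tx-stop", 0) > q.get("tx-wake", 0)
    ]

# Sample shaped like the dump above, with queue 1 one wake behind:
sample = [
    {"ifindex": 13, "queue-id": 0, "queue-type": "tx",
     "tx-stop": 113208, "tx-wake": 113208},
    {"ifindex": 13, "queue-id": 1, "queue-type": "tx",
     "tx-stop": 8633, "tx-wake": 8632},
]
print(stalled_tx_queues(sample))  # -> [(13, 1)]
```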
>
> Signed-off-by: Daniel Jurgens <danielj@nvidia.com>
> Reviewed-by: Jiri Pirko <jiri@nvidia.com>
Reviewed-by: Jason Xing <kerneljasonxing@gmail.com>

Thanks!
> ---
> drivers/net/virtio_net.c | 28 ++++++++++++++++++++++++++--
> 1 file changed, 26 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
> index 218a446c4c27..df6121c38a1b 100644
> --- a/drivers/net/virtio_net.c
> +++ b/drivers/net/virtio_net.c
> @@ -95,6 +95,8 @@ struct virtnet_sq_stats {
> u64_stats_t xdp_tx_drops;
> u64_stats_t kicks;
> u64_stats_t tx_timeouts;
> + u64_stats_t stop;
> + u64_stats_t wake;
> };
>
> struct virtnet_rq_stats {
> @@ -145,6 +147,8 @@ static const struct virtnet_stat_desc virtnet_rq_stats_desc[] = {
> static const struct virtnet_stat_desc virtnet_sq_stats_desc_qstat[] = {
> VIRTNET_SQ_STAT_QSTAT("packets", packets),
> VIRTNET_SQ_STAT_QSTAT("bytes", bytes),
> + VIRTNET_SQ_STAT_QSTAT("stop", stop),
> + VIRTNET_SQ_STAT_QSTAT("wake", wake),
> };
>
> static const struct virtnet_stat_desc virtnet_rq_stats_desc_qstat[] = {
> @@ -1014,6 +1018,9 @@ static void check_sq_full_and_disable(struct virtnet_info *vi,
> */
> if (sq->vq->num_free < 2+MAX_SKB_FRAGS) {
> netif_stop_subqueue(dev, qnum);
> + u64_stats_update_begin(&sq->stats.syncp);
> + u64_stats_inc(&sq->stats.stop);
> + u64_stats_update_end(&sq->stats.syncp);
> if (use_napi) {
> if (unlikely(!virtqueue_enable_cb_delayed(sq->vq)))
> virtqueue_napi_schedule(&sq->napi, sq->vq);
> @@ -1022,6 +1029,9 @@ static void check_sq_full_and_disable(struct virtnet_info *vi,
> free_old_xmit(sq, false);
> if (sq->vq->num_free >= 2+MAX_SKB_FRAGS) {
> netif_start_subqueue(dev, qnum);
> + u64_stats_update_begin(&sq->stats.syncp);
> + u64_stats_inc(&sq->stats.wake);
> + u64_stats_update_end(&sq->stats.syncp);
> virtqueue_disable_cb(sq->vq);
> }
> }
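For readers unfamiliar with the u64_stats_* bracketing used in these hunks: on 32-bit SMP kernels it is a tiny sequence counter so a reader can detect a torn 64-bit read and retry (on 64-bit it compiles away). A userspace sketch of the idea, not the kernel implementation:

```python
class SqStats:
    """Toy model of u64_stats_sync: the writer bumps a sequence
    counter around each update; a reader retries while the counter
    is odd (write in flight) or changed under it."""
    def __init__(self):
        self.seq = 0       # even = stable, odd = update in progress
        self.stop = 0
        self.wake = 0

    def update_begin(self):    # analogue of u64_stats_update_begin()
        self.seq += 1

    def update_end(self):      # analogue of u64_stats_update_end()
        self.seq += 1

    def read_stop(self):       # fetch_begin()/fetch_retry() loop
        while True:
            start = self.seq
            value = self.stop
            if start % 2 == 0 and start == self.seq:
                return value

s = SqStats()
s.update_begin()
s.stop += 1                # mirrors u64_stats_inc(&sq->stats.stop)
s.update_end()
print(s.read_stop())       # -> 1
```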
> @@ -2322,8 +2332,14 @@ static void virtnet_poll_cleantx(struct receive_queue *rq)
> free_old_xmit(sq, true);
> } while (unlikely(!virtqueue_enable_cb_delayed(sq->vq)));
>
> - if (sq->vq->num_free >= 2 + MAX_SKB_FRAGS)
> + if (sq->vq->num_free >= 2 + MAX_SKB_FRAGS) {
> + if (netif_tx_queue_stopped(txq)) {
> + u64_stats_update_begin(&sq->stats.syncp);
> + u64_stats_inc(&sq->stats.wake);
> + u64_stats_update_end(&sq->stats.syncp);
> + }
> netif_tx_wake_queue(txq);
> + }
>
> __netif_tx_unlock(txq);
> }
> @@ -2473,8 +2489,14 @@ static int virtnet_poll_tx(struct napi_struct *napi, int budget)
> virtqueue_disable_cb(sq->vq);
> free_old_xmit(sq, true);
>
> - if (sq->vq->num_free >= 2 + MAX_SKB_FRAGS)
> + if (sq->vq->num_free >= 2 + MAX_SKB_FRAGS) {
> + if (netif_tx_queue_stopped(txq)) {
> + u64_stats_update_begin(&sq->stats.syncp);
> + u64_stats_inc(&sq->stats.wake);
> + u64_stats_update_end(&sq->stats.syncp);
> + }
> netif_tx_wake_queue(txq);
> + }
>
> opaque = virtqueue_enable_cb_prepare(sq->vq);
>
> @@ -4790,6 +4812,8 @@ static void virtnet_get_base_stats(struct net_device *dev,
>
> tx->bytes = 0;
> tx->packets = 0;
> + tx->stop = 0;
> + tx->wake = 0;
>
> if (vi->device_stats_cap & VIRTIO_NET_STATS_TYPE_TX_BASIC) {
> tx->hw_drops = 0;
> --
> 2.45.0
>
>
Thread overview: 6+ messages
2024-05-10 20:19 [PATCH net-next 0/2] Add TX stop/wake counters Daniel Jurgens
2024-05-10 20:19 ` [PATCH net-next v2 1/2] netdev: Add queue stats for TX stop and wake Daniel Jurgens
2024-05-11 3:31 ` Jason Xing
2024-05-10 20:19 ` [PATCH net-next v2 2/2] virtio_net: Add TX stopped and wake counters Daniel Jurgens
2024-05-11 3:38 ` Jason Xing [this message]
2024-05-13 22:50 ` [PATCH net-next 0/2] Add TX stop/wake counters patchwork-bot+netdevbpf