From: Mina Almasry <almasrymina@google.com>
To: Alexander Lobakin <aleksander.lobakin@intel.com>
Cc: intel-wired-lan@lists.osuosl.org,
	Tony Nguyen <anthony.l.nguyen@intel.com>,
	 "David S. Miller" <davem@davemloft.net>,
	Eric Dumazet <edumazet@google.com>,
	 Jakub Kicinski <kuba@kernel.org>,
	Paolo Abeni <pabeni@redhat.com>,
	 nex.sw.ncis.osdt.itp.upstreaming@intel.com,
	netdev@vger.kernel.org,  linux-kernel@vger.kernel.org
Subject: Re: [PATCH RFC iwl-next 08/12] idpf: reuse libeth's definitions of parsed ptype structures
Date: Fri, 10 May 2024 09:22:05 -0700	[thread overview]
Message-ID: <CAHS8izO7agxQ6nbc=BoK5KuYd_jgVLgJTbZbmEUqarfVn300Tw@mail.gmail.com> (raw)
In-Reply-To: <20240510152620.2227312-9-aleksander.lobakin@intel.com>

On Fri, May 10, 2024 at 8:30 AM Alexander Lobakin
<aleksander.lobakin@intel.com> wrote:
>
> idpf's in-kernel parsed ptype structure is almost identical to the one
> used in the previous Intel drivers, which means it can be converted to
> use libeth's definitions and even helpers. The only difference is that
> it doesn't use a constant table like libie does; instead, the table is
> obtained from the device.
> Remove the driver counterpart and use libeth's helpers for hashes and
> checksums. This slightly optimizes skb field processing thanks to
> faster checks. Also, don't define a big static array of ptypes in
> &idpf_vport -- allocate it dynamically. The pointer to it is cached in
> &idpf_rx_queue anyway.
>
> Signed-off-by: Alexander Lobakin <aleksander.lobakin@intel.com>
> ---
>  drivers/net/ethernet/intel/idpf/Kconfig       |   1 +
>  drivers/net/ethernet/intel/idpf/idpf.h        |   2 +-
>  drivers/net/ethernet/intel/idpf/idpf_txrx.h   |  88 +-----------
>  drivers/net/ethernet/intel/idpf/idpf_lib.c    |   3 +
>  drivers/net/ethernet/intel/idpf/idpf_main.c   |   1 +
>  .../ethernet/intel/idpf/idpf_singleq_txrx.c   | 113 +++++++---------
>  drivers/net/ethernet/intel/idpf/idpf_txrx.c   | 125 +++++++-----------
>  .../net/ethernet/intel/idpf/idpf_virtchnl.c   |  69 ++++++----
>  8 files changed, 151 insertions(+), 251 deletions(-)
>
...
>   * idpf_send_get_rx_ptype_msg - Send virtchnl for ptype info
>   * @vport: virtual port data structure
> @@ -2526,7 +2541,7 @@ int idpf_send_get_rx_ptype_msg(struct idpf_vport *vport)
>  {
>         struct virtchnl2_get_ptype_info *get_ptype_info __free(kfree) = NULL;
>         struct virtchnl2_get_ptype_info *ptype_info __free(kfree) = NULL;
> -       struct idpf_rx_ptype_decoded *ptype_lkup = vport->rx_ptype_lkup;
> +       struct libeth_rx_pt *ptype_lkup __free(kfree) = NULL;
>         int max_ptype, ptypes_recvd = 0, ptype_offset;
>         struct idpf_adapter *adapter = vport->adapter;
>         struct idpf_vc_xn_params xn_params = {};
> @@ -2534,12 +2549,17 @@ int idpf_send_get_rx_ptype_msg(struct idpf_vport *vport)
>         ssize_t reply_sz;
>         int i, j, k;
>
> +       if (vport->rx_ptype_lkup)
> +               return 0;
> +
>         if (idpf_is_queue_model_split(vport->rxq_model))
>                 max_ptype = IDPF_RX_MAX_PTYPE;
>         else
>                 max_ptype = IDPF_RX_MAX_BASE_PTYPE;
>
> -       memset(vport->rx_ptype_lkup, 0, sizeof(vport->rx_ptype_lkup));
> +       ptype_lkup = kcalloc(max_ptype, sizeof(*ptype_lkup), GFP_KERNEL);
> +       if (!ptype_lkup)
> +               return -ENOMEM;
>
>         get_ptype_info = kzalloc(sizeof(*get_ptype_info), GFP_KERNEL);
>         if (!get_ptype_info)
> @@ -2604,9 +2624,6 @@ int idpf_send_get_rx_ptype_msg(struct idpf_vport *vport)
>                         else
>                                 k = ptype->ptype_id_8;
>
> -                       if (ptype->proto_id_count)
> -                               ptype_lkup[k].known = 1;
> -
>                         for (j = 0; j < ptype->proto_id_count; j++) {
>                                 id = le16_to_cpu(ptype->proto_id[j]);
>                                 switch (id) {
> @@ -2614,18 +2631,18 @@ int idpf_send_get_rx_ptype_msg(struct idpf_vport *vport)
>                                         if (pstate.tunnel_state ==
>                                                         IDPF_PTYPE_TUNNEL_IP) {
>                                                 ptype_lkup[k].tunnel_type =
> -                                               IDPF_RX_PTYPE_TUNNEL_IP_GRENAT;
> +                                               LIBETH_RX_PT_TUNNEL_IP_GRENAT;
>                                                 pstate.tunnel_state |=
>                                                 IDPF_PTYPE_TUNNEL_IP_GRENAT;
>                                         }
>                                         break;
>                                 case VIRTCHNL2_PROTO_HDR_MAC:
>                                         ptype_lkup[k].outer_ip =
> -                                               IDPF_RX_PTYPE_OUTER_L2;
> +                                               LIBETH_RX_PT_OUTER_L2;
>                                         if (pstate.tunnel_state ==
>                                                         IDPF_TUN_IP_GRE) {
>                                                 ptype_lkup[k].tunnel_type =
> -                                               IDPF_RX_PTYPE_TUNNEL_IP_GRENAT_MAC;
> +                                               LIBETH_RX_PT_TUNNEL_IP_GRENAT_MAC;
>                                                 pstate.tunnel_state |=
>                                                 IDPF_PTYPE_TUNNEL_IP_GRENAT_MAC;
>                                         }
> @@ -2652,23 +2669,23 @@ int idpf_send_get_rx_ptype_msg(struct idpf_vport *vport)
>                                         break;
>                                 case VIRTCHNL2_PROTO_HDR_UDP:
>                                         ptype_lkup[k].inner_prot =
> -                                       IDPF_RX_PTYPE_INNER_PROT_UDP;
> +                                       LIBETH_RX_PT_INNER_UDP;
>                                         break;
>                                 case VIRTCHNL2_PROTO_HDR_TCP:
>                                         ptype_lkup[k].inner_prot =
> -                                       IDPF_RX_PTYPE_INNER_PROT_TCP;
> +                                       LIBETH_RX_PT_INNER_TCP;
>                                         break;
>                                 case VIRTCHNL2_PROTO_HDR_SCTP:
>                                         ptype_lkup[k].inner_prot =
> -                                       IDPF_RX_PTYPE_INNER_PROT_SCTP;
> +                                       LIBETH_RX_PT_INNER_SCTP;
>                                         break;
>                                 case VIRTCHNL2_PROTO_HDR_ICMP:
>                                         ptype_lkup[k].inner_prot =
> -                                       IDPF_RX_PTYPE_INNER_PROT_ICMP;
> +                                       LIBETH_RX_PT_INNER_ICMP;
>                                         break;
>                                 case VIRTCHNL2_PROTO_HDR_PAY:
>                                         ptype_lkup[k].payload_layer =
> -                                               IDPF_RX_PTYPE_PAYLOAD_LAYER_PAY2;
> +                                               LIBETH_RX_PT_PAYLOAD_L2;
>                                         break;
>                                 case VIRTCHNL2_PROTO_HDR_ICMPV6:
>                                 case VIRTCHNL2_PROTO_HDR_IPV6_EH:
> @@ -2722,9 +2739,13 @@ int idpf_send_get_rx_ptype_msg(struct idpf_vport *vport)
>                                         break;
>                                 }
>                         }
> +
> +                       idpf_finalize_ptype_lookup(&ptype_lkup[k]);
>                 }
>         }
>
> +       vport->rx_ptype_lkup = no_free_ptr(ptype_lkup);
> +

Hi Olek,

I think you also need to patch up the early return from
idpf_send_get_rx_ptype_msg; otherwise vport->rx_ptype_lkup is never set
and I run into a crash later on. Something like:

diff --git a/drivers/net/ethernet/intel/idpf/idpf_virtchnl.c b/drivers/net/ethernet/intel/idpf/idpf_virtchnl.c
index a0aaa849df24..80d9c09ff407 100644
--- a/drivers/net/ethernet/intel/idpf/idpf_virtchnl.c
+++ b/drivers/net/ethernet/intel/idpf/idpf_virtchnl.c
@@ -2629,7 +2629,7 @@ int idpf_send_get_rx_ptype_msg(struct idpf_vport *vport)
                        /* 0xFFFF indicates end of ptypes */
                        if (le16_to_cpu(ptype->ptype_id_10) ==
                                                        IDPF_INVALID_PTYPE_ID)
-                               return 0;
+                               goto done;

                        if (idpf_is_queue_model_split(vport->rxq_model))
                                k = le16_to_cpu(ptype->ptype_id_10);
@@ -2756,6 +2756,7 @@ int idpf_send_get_rx_ptype_msg(struct idpf_vport *vport)
                }
        }

+done:
        vport->rx_ptype_lkup = no_free_ptr(ptype_lkup);

        return 0;

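To make the failure mode concrete, here is a stand-alone sketch of the
__free(kfree)/no_free_ptr() semantics the function relies on (the helper
and its parameter names below are made up for illustration; this is not
the idpf code): every return path that does not pass the pointer through
no_free_ptr() frees the allocation on scope exit, so the early "return 0"
leaves the table freed and vport->rx_ptype_lkup NULL.

#include <linux/cleanup.h>
#include <linux/slab.h>
#include <linux/types.h>

/* Hypothetical helper, for illustration only -- not the idpf code. */
static int example_build_lkup(u32 **lkup, unsigned int max_ptype,
			      bool hit_end_marker)
{
	u32 *table __free(kfree) = kcalloc(max_ptype, sizeof(*table),
					   GFP_KERNEL);

	if (!table)
		return -ENOMEM;

	if (hit_end_marker)
		/* A "return 0" here would kfree() the table on scope exit
		 * and leave *lkup NULL; jumping to "done" hands it over
		 * instead.
		 */
		goto done;

	/* ... fill in table[] from the device's reply ... */

done:
	*lkup = no_free_ptr(table);	/* transfer ownership to the caller */

	return 0;
}

That's why the goto matters: the end-of-ptypes path has to hand the table
over just like the normal exit does.
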
-- 
Thanks,
Mina

Thread overview: 18+ messages
2024-05-10 15:26 [PATCH RFC iwl-next 00/12] idpf: XDP chapter I: convert Rx to libeth Alexander Lobakin
2024-05-10 15:26 ` [PATCH RFC iwl-next 01/12] libeth: add cacheline / struct alignment helpers Alexander Lobakin
2024-05-10 15:26 ` [PATCH RFC iwl-next 02/12] idpf: stop using macros for accessing queue descriptors Alexander Lobakin
2024-05-16 17:45   ` Mina Almasry
2024-05-10 15:26 ` [PATCH RFC iwl-next 03/12] idpf: split &idpf_queue into 4 strictly-typed queue structures Alexander Lobakin
2024-05-10 15:26 ` [PATCH RFC iwl-next 04/12] idpf: avoid bloating &idpf_q_vector with big %NR_CPUS Alexander Lobakin
2024-05-10 15:26 ` [PATCH RFC iwl-next 05/12] idpf: strictly assert cachelines of queue and queue vector structures Alexander Lobakin
2024-05-10 15:26 ` [PATCH RFC iwl-next 06/12] idpf: merge singleq and splitq &net_device_ops Alexander Lobakin
2024-05-10 15:26 ` [PATCH RFC iwl-next 07/12] idpf: compile singleq code only under default-n CONFIG_IDPF_SINGLEQ Alexander Lobakin
2024-05-14  2:01   ` [Intel-wired-lan] " Tantilov, Emil S
2024-05-16 10:40     ` Alexander Lobakin
2024-05-10 15:26 ` [PATCH RFC iwl-next 08/12] idpf: reuse libeth's definitions of parsed ptype structures Alexander Lobakin
2024-05-10 16:22   ` Mina Almasry [this message]
2024-05-27 11:13     ` Alexander Lobakin
2024-05-10 15:26 ` [PATCH RFC iwl-next 09/12] idpf: remove legacy Page Pool Ethtool stats Alexander Lobakin
2024-05-10 15:26 ` [PATCH RFC iwl-next 10/12] libeth: support different types of buffers for Rx Alexander Lobakin
2024-05-10 15:26 ` [PATCH RFC iwl-next 11/12] idpf: convert header split mode to libeth + napi_build_skb() Alexander Lobakin
2024-05-10 15:26 ` [PATCH RFC iwl-next 12/12] idpf: use libeth Rx buffer management for payload buffer Alexander Lobakin
