From: "Mostafa, Jalal (IPE)" <jalal.mostafa@kit.edu>
To: <xdp-newbies@vger.kernel.org>
Subject: Re: AF_XDP-example/txonly: strange performance with different packet sizes
Date: Mon, 31 Oct 2022 23:40:10 +0100 [thread overview]
Message-ID: <feb90fb7-31e3-b111-76d7-a8df3420253e@kit.edu> (raw)
In-Reply-To: <7779608e-0207-a4dd-0979-90ef4ac46e86@kit.edu>
Hi,
MPWQE apparently causes this in mlx5: it does not provide true
zero-copy for packet sizes below 256 bytes. To avoid saturating the
PCIe bus with many small transactions, MPWQE copies multiple small
packets into one fixed-size memory block and sends them in a single
work queue entry, which explains the degrading performance as packet
sizes grow from 64B to 256B.
https://github.com/torvalds/linux/blob/5aaef24b5c6d4246b2cac1be949869fa36577737/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.h#L159
MPWQE can be turned off using:

  ethtool --set-priv-flags enp2s0np0 xdp_tx_mpwqe off
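For scripting this check across hosts, a minimal POSIX-shell helper can
assemble the two relevant ethtool invocations. This is only a sketch:
the flag name xdp_tx_mpwqe is mlx5-specific and the interface name is
an example from this thread; run the printed commands as root on the
target machine.

```shell
# Sketch: print the ethtool commands to inspect, then disable, MPWQE.
# Assumptions: mlx5 driver exposing the xdp_tx_mpwqe private flag;
# the interface name passed in is just an example.
mpwqe_cmds() {
    iface="$1"
    # Show current private-flag state (look for xdp_tx_mpwqe in the output).
    printf 'ethtool --show-priv-flags %s\n' "$iface"
    # Turn multi-packet WQEs off for the XDP TX path.
    printf 'ethtool --set-priv-flags %s xdp_tx_mpwqe off\n' "$iface"
}

mpwqe_cmds enp2s0np0
```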
With MPWQE off, performance reaches 25M pps at 64B and 128B, and 18M
pps at 256B, which saturates the 40 Gbps link.
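As a sanity check on the saturation claim, the theoretical line-rate
packet rate can be computed from the per-frame wire size. The sketch
below assumes 24 bytes of per-packet overhead on top of the size passed
to xdpsock (4B FCS appended by the NIC, plus 8B preamble and 12B
inter-frame gap); under that assumption, 40 Gbps gives roughly 17.9
Mpps at 256B, matching the measured 18M, while 64B line rate is about
56.8 Mpps, so 25M pps at 64B is a host/AF_XDP limit rather than the link.

```shell
# Theoretical line rate in Mpps for a given link speed and packet size.
# Assumption: wire size = packet size + 4B FCS + 8B preamble + 12B IFG.
line_rate_mpps() {
    awk -v gbps="$1" -v size="$2" \
        'BEGIN { printf "%.1f\n", gbps * 1e9 / ((size + 24) * 8) / 1e6 }'
}

line_rate_mpps 40 256   # ~17.9 Mpps: 18M pps at 256B is line rate
line_rate_mpps 40 64    # ~56.8 Mpps: 25M pps at 64B is not
```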
Best,
Jalal
On 10/26/22 12:34, Mostafa, Jalal (IPE) wrote:
> Hi,
>
> I am running the txonly microbenchmark from the AF_XDP-example sample
> program in bpf-examples. However, I noticed strange performance with
> different tx packet sizes.
>
> With the following command I generate the below results:
>
> ./xdpsock -i enp2s0np0 -q 4 -t -b 2048 -m -s 128
>
> Driver: mlx5 on 40Gbps ConnectX-6
> Kernel version: 6.0.0-rc6+
>
> | Pkt Size | Native-ZC PPS | Native-C PPS | Generic PPS |
> |----------|---------------|--------------|-------------|
> | 64       | 16.5M         | 1.73M        | 1.71M       |
> | 128      | 9.42M         | 1.72M        | 1.66M       |
> | 256      | 7.78M         | 1.64M        | 1.66M       |
> | 512      | 9.39M         | 1.62M        | 1.59M       |
> | 1024     | 4.78M         | 1.42M        | 1.38M       |
>
> At size 128B, I expect 16.5M pps (the limiting performance of AF_XDP)
> since the link is not saturated. The problem is more obvious at size
> 256B: only 7.78M pps, even though throughput jumps back up to 9.39M
> at 512B. So I think the problem is related to the packet size and not
> to limited performance of the xsk engine in the kernel. Or what do
> you think?
>
> I have already opened an issue on GitHub here:
>
> https://github.com/xdp-project/bpf-examples/issues/61
>
> Best,
> Jalal
>