DPDK patches and discussions
From: "Mário Kuka" <kuka@cesnet.cz>
To: dev@dpdk.org
Cc: orika@nvidia.com, bingz@nvidia.com, viktorin@cesnet.cz
Subject: Hairpin Queues Throughput ConnectX-6
Date: Wed, 19 Jun 2024 08:45:21 +0200
Message-ID: <3d746dbc-330e-403f-b87f-bf495cac3437@cesnet.cz>
In-Reply-To: <fbfd6dd8-2cfc-406e-be90-350dc2fea02e@cesnet.cz>



Hello,

I want to use hairpin queues to forward high-priority traffic (such as
LACP). My goal is to ensure that this traffic is not dropped if the
software pipeline is overwhelmed. However, during testing with
dpdk-testpmd I can't achieve full throughput on the hairpin queues.
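
For context, this is roughly what testpmd's --hairpinq option does
through the ethdev API. A minimal, untested sketch; the port id, queue
index, and descriptor count are placeholders for illustration:

#include <rte_ethdev.h>
#include <rte_debug.h>

/* Bind RX hairpin queue 1 to TX hairpin queue 1 on the same port, so
 * matched traffic is looped back in hardware. Assumes the port was
 * configured with one standard RX/TX queue, so index 1 is the first
 * hairpin queue. */
uint16_t port_id = 0, hp_queue = 1, nb_desc = 1024;
struct rte_eth_hairpin_conf hp_conf = { .peer_count = 1 };

hp_conf.peers[0].port = port_id;   /* peer is the same port */
hp_conf.peers[0].queue = hp_queue; /* RX queue 1 <-> TX queue 1 */
if (rte_eth_rx_hairpin_queue_setup(port_id, hp_queue, nb_desc, &hp_conf) != 0 ||
    rte_eth_tx_hairpin_queue_setup(port_id, hp_queue, nb_desc, &hp_conf) != 0)
	rte_exit(1, "hairpin queue setup failed\n");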

The best result I have been able to achieve for 64 B packets is 83 Gbps,
with this configuration:
$ sudo dpdk-testpmd -l 0-1 -n 4 -a 0000:17:00.0,hp_buf_log_sz=19 -- --rxq=1 --txq=1 --rxd=4096 --txd=4096 --hairpinq=2
testpmd> flow create 0 ingress pattern eth src is 00:10:94:00:00:03 / end actions rss queues 1 2 end / end
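
The same rule expressed through the rte_flow C API would look roughly
like this (an illustrative, untested sketch; RSS hash types and key are
left at zero for brevity, where testpmd fills in its defaults):

#include <rte_flow.h>

/* Match source MAC 00:10:94:00:00:03 and spread matched packets across
 * hairpin queues 1 and 2 via RSS. */
struct rte_flow_attr attr = { .ingress = 1 };
struct rte_flow_item_eth eth_spec = {
	.hdr.src_addr.addr_bytes = { 0x00, 0x10, 0x94, 0x00, 0x00, 0x03 },
};
struct rte_flow_item_eth eth_mask = {
	.hdr.src_addr.addr_bytes = { 0xff, 0xff, 0xff, 0xff, 0xff, 0xff },
};
struct rte_flow_item pattern[] = {
	{ .type = RTE_FLOW_ITEM_TYPE_ETH, .spec = &eth_spec, .mask = &eth_mask },
	{ .type = RTE_FLOW_ITEM_TYPE_END },
};
uint16_t hp_queues[] = { 1, 2 };
struct rte_flow_action_rss rss = {
	.queue_num = 2,
	.queue = hp_queues, /* hairpin queue indices */
};
struct rte_flow_action actions[] = {
	{ .type = RTE_FLOW_ACTION_TYPE_RSS, .conf = &rss },
	{ .type = RTE_FLOW_ACTION_TYPE_END },
};
struct rte_flow_error err;
struct rte_flow *flow = rte_flow_create(0, &attr, pattern, actions, &err);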

For packets in the 68-80 B range I measured even lower throughput; I
reached full throughput only with packets larger than 112 B.

With only one hairpin queue, I didn't get more than 55 Gbps:
$ sudo dpdk-testpmd -l 0-1 -n 4 -a 0000:17:00.0,hp_buf_log_sz=19 -- --rxq=1 --txq=1 --rxd=4096 --txd=4096 --hairpinq=1 -i
testpmd> flow create 0 ingress pattern eth src is 00:10:94:00:00:03 / end actions queue index 1 / end
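
In C this corresponds to a plain queue action instead of RSS (same
pattern as in the sketch above, equally untested):

/* Steer all matched packets to hairpin queue 1 directly. */
struct rte_flow_action_queue queue = { .index = 1 };
struct rte_flow_action actions[] = {
	{ .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
	{ .type = RTE_FLOW_ACTION_TYPE_END },
};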

I tried to use locked device memory for the TX and RX queues, but it
seems that this is not supported:
"--hairpin-mode=0x011000" (bit 16: hairpin TX queues will use locked
device memory; bit 12: hairpin RX queues will use locked device memory)
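
If I read the ethdev API correctly, those bits map to the
use_locked_device_memory flag in struct rte_eth_hairpin_conf, which can
be gated on the capability reported by the PMD. A sketch, reusing
hp_conf and port_id from the setup sketch above; I have not verified
that mlx5 reports this capability on my card:

/* Request locked device memory for the hairpin queues only if the PMD
 * advertises support; force_memory makes queue setup fail instead of
 * silently falling back to the default memory type. */
struct rte_eth_hairpin_cap cap;

if (rte_eth_dev_hairpin_capability_get(port_id, &cap) == 0 &&
    cap.rx_cap.locked_device_memory &&
    cap.tx_cap.locked_device_memory) {
	hp_conf.use_locked_device_memory = 1;
	hp_conf.force_memory = 1;
}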

I expected that achieving full throughput with hairpin queues would not
be a problem. Is my expectation too optimistic?

What other parameters besides 'hp_buf_log_sz' can I use to achieve full
throughput? I tried combining the following mlx5 devargs: mprq_en,
rxqs_min_mprq, mprq_log_stride_num, txq_inline_mpw, and rxq_pkt_pad_en,
but none of them had a positive impact on throughput.

My setup:
DPDK version: commit 76cef1af8bdaeaf67a5c4ca5df3f221df994dc46 (origin/main, Wed Apr 3 2024)
OFED version: MLNX_OFED_LINUX-23.10-0.5.5.0 (OFED-23.10-0.5.5)
ConnectX-6 device: 0000:17:00.0 'MT2892 Family [ConnectX-6 Dx] 101d' if=ens1f0np0 drv=mlx5_core
PCIe version: 4.0
OS: Oracle Linux Server 8.10

Any guidance or suggestions on how to achieve full throughput would be 
greatly appreciated.

Thank you,
Mário Kuka




[-- Attachment #1.2: Type: text/html, Size: 4355 bytes --]

[-- Attachment #2: S/MIME Cryptographic Signature --]
[-- Type: application/pkcs7-signature, Size: 4312 bytes --]

Thread overview: 3+ messages
     [not found] <fbfd6dd8-2cfc-406e-be90-350dc2fea02e@cesnet.cz>
2024-06-19  6:45 ` Mário Kuka [this message]
2024-06-25  0:22   ` Dmitry Kozlyuk
2024-06-27 11:42     ` Mário Kuka
