DPDK patches and discussions
From: "Mário Kuka" <kuka@cesnet.cz>
To: Dmitry Kozlyuk <dmitry.kozliuk@gmail.com>
Cc: dev@dpdk.org, orika@nvidia.com, bingz@nvidia.com, viktorin@cesnet.cz
Subject: Re: Hairpin Queues Throughput ConnectX-6
Date: Thu, 27 Jun 2024 13:42:38 +0200	[thread overview]
Message-ID: <82d8f67c-3b0b-46c2-a94b-8457d0c602c2@cesnet.cz> (raw)
In-Reply-To: <20240625032224.45b65339@sovereign>



Hi Dmitry,

Thank you for your helpful reply.

> Try enabling "Explicit Tx rule" mode if possible.
> I was able to achieve 137 Mpps @ 64B with the following command:
>
> dpdk-testpmd -a 21:00.0 -a c1:00.0 --in-memory -- \
>      -i --rxq=1 --txq=1 --hairpinq=8 --hairpin-mode=0x10

Based on this, I was able to achieve 142 Mpps (96.08 Gbps) @ 64B with the following command:

sudo dpdk-testpmd -l 0-1 -n 4 -a 0000:c4:00.0,hp_buf_log_sz=13 \
     --in-memory -- --rxq=1 --txq=1 --hairpinq=12 --hairpin-mode=0x10 -i
     
flow create 0 ingress pattern eth src is 00:10:94:00:00:02 / end actions rss queues 1 2 3 4 5 6 7 8 9 10 11 12 end / end

Almost full speed :).
Any other value of "hp_buf_log_sz", or adding more queues, does not improve the results; it only makes them worse.
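
As a side note for when we move this from testpmd into our own application: my understanding is that the explicit-Tx-rule hairpin setup above maps onto the rte_eth_rx/tx_hairpin_queue_setup() API roughly as in the sketch below. This is only an untested illustration; the function name, queue indices and descriptor count are placeholders, not values I have validated.

#include <rte_ethdev.h>

/* Sketch: pair one hairpin Rx queue with one hairpin Tx queue on the same
 * port, with explicit Tx flow rules (the testpmd --hairpin-mode=0x10 case).
 * Hairpin queue indices follow the regular Rx/Tx queue indices. */
static int
setup_hairpin_pair(uint16_t port_id, uint16_t rxq, uint16_t txq,
                   uint16_t nb_desc)
{
        struct rte_eth_hairpin_conf conf = {
                .peer_count = 1,
                .tx_explicit = 1, /* the application installs the egress flow rules */
                .manual_bind = 0, /* let the PMD bind the Rx/Tx pair itself */
        };
        int ret;

        conf.peers[0].port = port_id;
        conf.peers[0].queue = txq;
        ret = rte_eth_rx_hairpin_queue_setup(port_id, rxq, nb_desc, &conf);
        if (ret != 0)
                return ret;

        conf.peers[0].queue = rxq;
        return rte_eth_tx_hairpin_queue_setup(port_id, txq, nb_desc, &conf);
}

The "rss queues 1 ... 12" rule would then be created with rte_flow_create() against the hairpin Rx queue indices, as testpmd does above.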

> RxQ pinned in device memory requires firmware configuration [1]:
>
> mlxconfig -y -d $pci_addr set MEMIC_SIZE_LIMIT=0 HAIRPIN_DATA_BUFFER_LOCK=1
> mlxfwreset -y -d $pci_addr reset
>
> [1]: https://doc.dpdk.org/guides/platform/mlx5.html?highlight=hairpin_data_buffer_lock
>
> However, pinned RxQ didn't improve anything for me.

I tried it, but it didn't improve anything for me either.
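
(If we revisit the pinned RxQ later: if I read the testpmd --hairpin-mode bits right, locked device memory for the hairpin Rx side combined with explicit Tx rules should be bit 12 plus bit 4, i.e. 0x1000 | 0x10 = 0x1010. This is an assumption on my side, I have not re-run the measurement with it:)

sudo dpdk-testpmd -l 0-1 -n 4 -a 0000:c4:00.0,hp_buf_log_sz=13 \
     --in-memory -- --rxq=1 --txq=1 --hairpinq=12 --hairpin-mode=0x1010 -i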

Mário

On 25/06/2024 02:22, Dmitry Kozlyuk wrote:
> Hi Mário,
>
> 2024-06-19 08:45 (UTC+0200), Mário Kuka:
>> Hello,
>>
>> I want to use hairpin queues to forward high priority traffic (such as
>> LACP).
>> My goal is to ensure that this traffic is not dropped in case the
>> software pipeline is overwhelmed.
>> But during testing with dpdk-testpmd I can't achieve full throughput for
>> hairpin queues.
> For maintainers: I'd like to express interest in this use case too.
>
>> The best result I have been able to achieve for 64B packets is 83 Gbps
>> in this configuration:
>> $ sudo dpdk-testpmd -l 0-1 -n 4 -a 0000:17:00.0,hp_buf_log_sz=19 --
>> --rxq=1 --txq=1 --rxd=4096 --txd=4096 --hairpinq=2
>> testpmd> flow create 0 ingress pattern eth src is 00:10:94:00:00:03 /
>> end actions rss queues 1 2 end / end
> Try enabling "Explicit Tx rule" mode if possible.
> I was able to achieve 137 Mpps @ 64B with the following command:
>
> dpdk-testpmd -a 21:00.0 -a c1:00.0 --in-memory -- \
>      -i --rxq=1 --txq=1 --hairpinq=8 --hairpin-mode=0x10
>
> You might get even better speed, because my flow rules were more complicated
> (RTE Flow based "router on-a-stick"):
>
> flow create 0 ingress group 1 pattern eth / vlan vid is 721 / end actions of_set_vlan_vid vlan_vid 722 / rss queues 1 2 3 4 5 6 7 8 end / end
> flow create 1 ingress group 1 pattern eth / vlan vid is 721 / end actions of_set_vlan_vid vlan_vid 722 / rss queues 1 2 3 4 5 6 7 8 end / end
> flow create 0 ingress group 1 pattern eth / vlan vid is 722 / end actions of_set_vlan_vid vlan_vid 721 / rss queues 1 2 3 4 5 6 7 8 end / end
> flow create 1 ingress group 1 pattern eth / vlan vid is 722 / end actions of_set_vlan_vid vlan_vid 721 / rss queues 1 2 3 4 5 6 7 8 end / end
> flow create 0 ingress group 0 pattern end actions jump group 1 / end
> flow create 1 ingress group 0 pattern end actions jump group 1 / end
>
>> For packets in the range 68-80B I measured even lower throughput.
>> I measured full throughput only for packets larger than 112B.
>>
>> With only one queue, I didn't get more than 55 Gbps:
>> $ sudo dpdk-testpmd -l 0-1 -n 4 -a 0000:17:00.0,hp_buf_log_sz=19 --
>> --rxq=1 --txq=1 --rxd=4096 --txd=4096 --hairpinq=1 -i
>> testpmd> flow create 0 ingress pattern eth src is 00:10:94:00:00:03 /
>> end actions queue index 1 / end
>>
>> I tried to use locked device memory for TX and RX queues, but it seems
>> that this is not supported:
>> "--hairpin-mode=0x011000" (bit 16 - hairpin TX queues will use locked
>> device memory, bit 12 - hairpin RX queues will use locked device memory)
> RxQ pinned in device memory requires firmware configuration [1]:
>
> mlxconfig -y -d $pci_addr set MEMIC_SIZE_LIMIT=0 HAIRPIN_DATA_BUFFER_LOCK=1
> mlxfwreset -y -d $pci_addr reset
>
> [1]: https://doc.dpdk.org/guides/platform/mlx5.html?highlight=hairpin_data_buffer_lock
>
> However, pinned RxQ didn't improve anything for me.
>
> TxQ pinned in device memory is not supported by net/mlx5.
> TxQ pinned to DPDK memory made performance awful (predictably).
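
(Noting for ourselves: when we try locked device memory from an application instead of testpmd, my assumption -- untested -- is that bit 12 of --hairpin-mode corresponds to the use_locked_device_memory flag of struct rte_eth_hairpin_conf, i.e. a variation of the setup sketch above along these lines:)

        struct rte_eth_hairpin_conf rx_conf = {
                .peer_count = 1,
                .tx_explicit = 1,
                /* Rx hairpin data buffers in locked device memory; needs the
                 * MEMIC_SIZE_LIMIT=0 / HAIRPIN_DATA_BUFFER_LOCK=1 firmware
                 * settings from [1]. */
                .use_locked_device_memory = 1,
        };
        /* then: rte_eth_rx_hairpin_queue_setup(port_id, rxq, nb_desc, &rx_conf); */
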
>
>> I was expecting that achieving full throughput with hairpin queues would
>> not be a problem.
>> Is my expectation too optimistic?
>>
>> What other parameters besides 'hp_buf_log_sz' can I use to achieve full
>> throughput?
> In my experiments, the default "hp_buf_log_sz" of 16 is optimal.
> The most influential parameter appears to be the number of hairpin queues.
>
>> I tried combining the following parameters: mprq_en=, rxqs_min_mprq=,
>> mprq_log_stride_num=, txq_inline_mpw=, rxq_pkt_pad_en=,
>> but with no positive impact on throughput.
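
(For completeness, those mlx5 devargs were combined on a single -a option, roughly like the line below; the particular values are only illustrative, and none of the combinations helped:)

sudo dpdk-testpmd -l 0-1 -n 4 \
     -a 0000:17:00.0,hp_buf_log_sz=19,mprq_en=1,rxqs_min_mprq=1,rxq_pkt_pad_en=1 \
     -- --rxq=1 --txq=1 --rxd=4096 --txd=4096 --hairpinq=2 -i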



