DPDK usage discussions
From: Yasuhiro Ohara <yasu1976@gmail.com>
To: Cliff Burdick <shaklee3@gmail.com>
Cc: Thomas Monjalon <thomas@monjalon.net>, users@dpdk.org
Subject: Re: ConnectX-7 400GbE wire-rate?
Date: Thu, 25 Jul 2024 01:07:44 +0900	[thread overview]
Message-ID: <CAJO98mPG0LXbzVRM9GniUg3pnee2e9kXF9=a7x4NNjb01ZmmCg@mail.gmail.com> (raw)
In-Reply-To: <CA+Gp1naBuBYjzLsLWyCm=FCVUyFvtxBgSfT8QPEY21CdRLi6Tg@mail.gmail.com>

Hi Cliff.

I saw it, but thought at the time that 200GbE x 2 might behave differently
from 400GbE x 1.
But good to know that you hit 400Gbps multiple times.

Thank you.

regards,
Yasu


On Wed, Jul 24, 2024 at 23:50, Cliff Burdick <shaklee3@gmail.com> wrote:
>
> To answer your original question, yes, I've hit 400Gbps many times provided there were enough queues/cores. You can also see test 12.2 here achieved line rate:
>
> https://fast.dpdk.org/doc/perf/DPDK_23_11_NVIDIA_NIC_performance_report.pdf
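Cliff's point about needing enough queues/cores can be sanity-checked with back-of-envelope arithmetic. The per-core packet budget below is an assumed ballpark figure for illustration, not something measured in this thread:

```python
# Rough pps math for 400GbE with ~1500-byte frames.
import math

LINE_RATE_BPS = 400e9   # 400 Gb/s line rate
FRAME_BYTES = 1538      # 1500B payload + 38B Ethernet framing overhead
                        # (header + FCS + preamble + inter-frame gap)

pps_needed = LINE_RATE_BPS / (FRAME_BYTES * 8)
print(f"packets/s needed: {pps_needed / 1e6:.1f} Mpps")   # ~32.5 Mpps

PER_CORE_PPS = 12e6     # assumed per-core txonly budget; varies by CPU/NIC
cores = math.ceil(pps_needed / PER_CORE_PPS)
print(f"TX queues/cores needed at that budget: {cores}")  # 3
```

Even with large frames, a single TX queue pinned to one core falls well short of line rate, which is consistent with the report being capped far below 400Gbps.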
>
> On Wed, Jul 24, 2024 at 4:58 AM Yasuhiro Ohara <yasu1976@gmail.com> wrote:
>>
>> Hi Thomas,
>> Thank you for getting back to this.
>>
>> From what we have seen, it seemed to be a Mellanox firmware issue
>> rather than a pktgen or DPDK issue.
>> When we were using a ConnectX-7 with firmware version 28.41.1000
>> on a Core i9 machine, TX bandwidth was limited to 100Gbps,
>> even though the physical maximum should be 400Gbps.
>> After downgrading the firmware to 28.39.2048 or 28.39.3004,
>> the same Core i9 machine could send 256Gbps.
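For anyone reproducing the downgrade test above, a sketch with the standard MFT/mstflint tools; the device path is the one reported later in this thread, and the firmware image file name is hypothetical:

```shell
# Query the current ConnectX-7 firmware version.
flint -d /dev/mst/mt4129_pciconf0 query

# Burn an older image (file name is a placeholder) to test 28.39.x behavior,
# then reset the device so the new firmware takes effect without a reboot.
flint -d /dev/mst/mt4129_pciconf0 -i fw-ConnectX7-28.39.3004.bin burn
mlxfwreset -d /dev/mst/mt4129_pciconf0 reset
```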
>>
>> Our bandwidth demonstration event completed successfully,
>> so this limited understanding is enough for us.
>> I think my colleague can help if someone wants to debug
>> this issue further.
>>
>> Thank you anyway.
>>
>> Regards,
>> Yasu
>>
>> On Tue, Jul 23, 2024 at 2:37, Thomas Monjalon <thomas@monjalon.net> wrote:
>> >
>> > Hello,
>> >
>> > I see there is no answer.
>> >
>> > Did you try with testpmd?
>> > I wonder whether this could be a limitation of dpdk-pktgen.
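A minimal testpmd txonly run of the kind suggested above might look like this; the core and queue counts are illustrative, and the PCI address is the one from the dmesg output quoted below:

```shell
# Generate TX-only traffic from the ConnectX-7 with multiple queues/cores
# to cross-check dpdk-pktgen's results.
dpdk-testpmd -l 0-8 -n 4 -a 0000:01:00.0 -- \
    --forward-mode=txonly --txonly-multi-flow \
    --txq=8 --rxq=8 --txd=4096 --burst=64 \
    --txpkts=1518 --stats-period 1
```

If testpmd also tops out at the same rate, that points at the NIC/firmware rather than the traffic generator.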
>> >
>> >
>> >
>> > 29/05/2024 03:32, Yasuhiro Ohara:
>> > > Hi. My colleague is trying to generate 400Gbps traffic using ConnectX-7,
>> > > but with no luck.
>> > >
>> > > Has anyone succeeded in generating 400Gbps (frames larger than 1500B are fine),
>> > > or are there any known issues?
>> > >
>> > > Using dpdk-pktgen, the link is successfully recognized as 400GbE,
>> > > but when we start the traffic, it seems to be capped at 100Gbps.
>> > >
>> > > Some info follows.
>> > >
>> > > [    1.490256] mlx5_core 0000:01:00.0: firmware version: 28.41.1000
>> > > [    1.492853] mlx5_core 0000:01:00.0: 504.112 Gb/s available PCIe
>> > > bandwidth (32.0 GT/s PCIe x16 link)
>> > > [    1.805419] mlx5_core 0000:01:00.0: Rate limit: 127 rates are
>> > > supported, range: 0Mbps to 195312Mbps
>> > > [    1.808477] mlx5_core 0000:01:00.0: E-Switch: Total vports 18, per
>> > > vport: max uc(128) max mc(2048)
>> > > [    1.827270] mlx5_core 0000:01:00.0: Port module event: module 0,
>> > > Cable unplugged
>> > > [    1.830317] mlx5_core 0000:01:00.0: mlx5_pcie_event:298:(pid 9):
>> > > PCIe slot advertised sufficient power (75W).
>> > > [    1.830610] mlx5_core 0000:01:00.0: MLX5E: StrdRq(1) RqSz(8)
>> > > StrdSz(2048) RxCqeCmprss(0)
>> > > [    1.929813] mlx5_core 0000:01:00.0: Supported tc offload range -
>> > > chains: 4294967294, prios: 4294967295
>> > >
>> > > Device type:    ConnectX7
>> > > Name:           MCX75310AAS-NEA_Ax
>> > > Description:    NVIDIA ConnectX-7 HHHL Adapter card; 400GbE / NDR IB
>> > > (default mode); Single-port OSFP; PCIe 5.0 x16; Crypto Disabled;
>> > > Secure Boot Enabled;
>> > > Device:         /dev/mst/mt4129_pciconf0
>> > >
>> > > Enabled Link Speed (Ext.)       : 0x00010000 (400G_4X)
>> > > Supported Cable Speed (Ext.)    : 0x00010800 (400G_4X,100G_1X)
>> > >
>> > > pktgen result:
>> > >
>> > > Pkts/s  Rx     :        0
>> > >         Tx     :  8059264
>> > > MBits/s Rx/Tx  :  0/97742
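For what it's worth, the reported counters are internally consistent: dividing the reported bit rate by the packet rate gives roughly 1516 bytes counted per packet, so the TX path really is pinned near 100 Gb/s rather than this being a counter glitch. A quick sketch:

```python
# Cross-check the pktgen counters quoted above.
pps = 8_059_264          # reported TX packets/s
mbits = 97_742           # reported TX Mbit/s

bytes_per_pkt = mbits * 1e6 / (pps * 8)
print(f"bytes counted per packet: {bytes_per_pkt:.0f}")   # ~1516

gbps = pps * bytes_per_pkt * 8 / 1e9
print(f"throughput: {gbps:.3f} Gb/s")                     # ~97.742
```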
>> > >
>> > >
>> > > Any info is appreciated. Thanks.
>> > >
>> > > regards,
>> > > Yasu
>> > >
>> >
>> >
>> >
>> >
>> >

Thread overview: 5+ messages
2024-05-29  1:32 Yasuhiro Ohara
2024-07-22 17:37 ` Thomas Monjalon
2024-07-24  1:19   ` Yasuhiro Ohara
2024-07-24 14:49     ` Cliff Burdick
2024-07-24 16:07       ` Yasuhiro Ohara [this message]
