DPDK usage discussions
From: Cliff Burdick <shaklee3@gmail.com>
To: Yasuhiro Ohara <yasu1976@gmail.com>
Cc: Thomas Monjalon <thomas@monjalon.net>, users@dpdk.org
Subject: Re: ConnectX-7 400GbE wire-rate?
Date: Wed, 24 Jul 2024 07:49:58 -0700	[thread overview]
Message-ID: <CA+Gp1naBuBYjzLsLWyCm=FCVUyFvtxBgSfT8QPEY21CdRLi6Tg@mail.gmail.com> (raw)
In-Reply-To: <CAJO98mOO1V2gzR-RzT+Q2Un_xMhOzNu4WYArm61acCqetc9fqQ@mail.gmail.com>


To answer your original question: yes, I've hit 400Gbps many times, provided
there were enough queues/cores. You can also see that test 12.2 in the report
linked below achieved line rate:

https://fast.dpdk.org/doc/perf/DPDK_23_11_NVIDIA_NIC_performance_report.pdf
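
In case it's useful as a starting point, the kind of testpmd run I would try
looks roughly like this; the PCI address matches the log below, but the core
and queue counts are only placeholders and need to be tuned to your CPU
topology:

  dpdk-testpmd -l 0-16 -n 4 -a 0000:01:00.0 -- \
    --nb-cores=16 --txq=16 --rxq=16 --txd=4096 --rxd=4096 \
    --forward-mode=txonly --txpkts=1518 --burst=64 --stats-period=1

With ~1500B frames spread over 16 TX queues, each queue only needs about
2 Mpps to reach 400Gbps in aggregate, so if the total still caps out, the
limit is almost certainly below DPDK (firmware, PCIe, or NIC configuration).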

On Wed, Jul 24, 2024 at 4:58 AM Yasuhiro Ohara <yasu1976@gmail.com> wrote:

> Hi Thomas,
> Thank you for getting back to this.
>
> From what we have seen, it seems to be a Mellanox firmware issue
> rather than a pktgen or DPDK issue.
> When we used a ConnectX-7 with firmware version 28.41.1000
> on a Core i9 machine, TX bandwidth was limited to 100Gbps,
> even though the physical maximum should be 400Gbps.
> After downgrading the firmware to 28.39.2048 or 28.39.3004,
> the same Core i9 machine could send 256Gbps.
>
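
For anyone who wants to reproduce this, the firmware version can be queried
and re-flashed with the MFT flint tool, roughly along these lines (the device
path is the one from the log further down; the firmware image filename is
just a placeholder):

  flint -d /dev/mst/mt4129_pciconf0 query
  flint -d /dev/mst/mt4129_pciconf0 -i fw-ConnectX7-rel-28_39_3004.bin burn
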
> Our bandwidth demonstration event completed successfully,
> so we are content with what we have learned so far.
> My colleague can help if someone wants to debug
> this issue further.
>
> Thank you anyway.
>
> Regards,
> Yasu
>
> On Tue, Jul 23, 2024 at 2:37 AM Thomas Monjalon <thomas@monjalon.net> wrote:
> >
> > Hello,
> >
> > I see there has been no answer yet.
> >
> > Did you try with testpmd?
> > I don't know whether this could be a limitation of dpdk-pktgen.
> >
> >
> >
> > 29/05/2024 03:32, Yasuhiro Ohara:
> > > Hi. My colleague is trying to generate 400Gbps traffic using
> > > ConnectX-7, but with no luck.
> > >
> > > Has anyone succeeded in generating 400Gbps (frames larger than
> > > 1500B are fine), or are there any known issues?
> > >
> > > Using dpdk-pktgen, the link is successfully recognized as 400GbE
> > > but when we start the traffic, it seems to be capped at 100Gbps.
> > >
> > > Some info follows.
> > >
> > > [    1.490256] mlx5_core 0000:01:00.0: firmware version: 28.41.1000
> > > [    1.492853] mlx5_core 0000:01:00.0: 504.112 Gb/s available PCIe
> > > bandwidth (32.0 GT/s PCIe x16 link)
> > > [    1.805419] mlx5_core 0000:01:00.0: Rate limit: 127 rates are
> > > supported, range: 0Mbps to 195312Mbps
> > > [    1.808477] mlx5_core 0000:01:00.0: E-Switch: Total vports 18, per
> > > vport: max uc(128) max mc(2048)
> > > [    1.827270] mlx5_core 0000:01:00.0: Port module event: module 0,
> > > Cable unplugged
> > > [    1.830317] mlx5_core 0000:01:00.0: mlx5_pcie_event:298:(pid 9):
> > > PCIe slot advertised sufficient power (75W).
> > > [    1.830610] mlx5_core 0000:01:00.0: MLX5E: StrdRq(1) RqSz(8)
> > > StrdSz(2048) RxCqeCmprss(0)
> > > [    1.929813] mlx5_core 0000:01:00.0: Supported tc offload range -
> > > chains: 4294967294, prios: 4294967295
> > >
> > > Device type:    ConnectX7
> > > Name:           MCX75310AAS-NEA_Ax
> > > Description:    NVIDIA ConnectX-7 HHHL Adapter card; 400GbE / NDR IB
> > > (default mode); Single-port OSFP; PCIe 5.0 x16; Crypto Disabled;
> > > Secure Boot Enabled;
> > > Device:         /dev/mst/mt4129_pciconf0
> > >
> > > Enabled Link Speed (Ext.)       : 0x00010000 (400G_4X)
> > > Supported Cable Speed (Ext.)    : 0x00010800 (400G_4X,100G_1X)
> > >
> > > pktgen result:
> > >
> > > Pkts/s Rx         :              0
> > >            TX         :  8059264
> > >
> > > MBits/s Rx/Tx  :   0/97742
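
For what it's worth, those two numbers are self-consistent: 97,742 Mbit/s
divided by 8,059,264 pkt/s comes out to roughly 12,100 bits, i.e. about
1,516 bytes per frame. pktgen is already sending full-size frames, so the
bottleneck looks like a hard ~100Gbps TX cap rather than a small-packet or
configuration problem.
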
> > >
> > >
> > > Any info is appreciated. Thanks.
> > >
> > > regards,
> > > Yasu
> > >


Thread overview: 5+ messages
2024-05-29  1:32 Yasuhiro Ohara
2024-07-22 17:37 ` Thomas Monjalon
2024-07-24  1:19   ` Yasuhiro Ohara
2024-07-24 14:49     ` Cliff Burdick [this message]
2024-07-24 16:07       ` Yasuhiro Ohara
