To answer your original question: yes, I've hit 400Gbps many times, provided there were enough queues/cores. You can also see that test 12.2 here achieved line rate:
Hi Thomas,
Thank you for getting back to this.
From what we have seen, it appears to be a Mellanox firmware issue
rather than a pktgen or DPDK issue.
When we used a ConnectX-7 with firmware version 28.41.1000
on a Core i9 machine, the Tx bandwidth was limited to 100Gbps,
even though the physical maximum should be 400Gbps.
After downgrading the firmware to 28.39.2048 or 28.39.3004,
the same Core i9 machine could send 256Gbps.
Our bandwidth demonstration event completed successfully,
so this level of understanding is enough for our purposes.
I think my colleague can help if someone wants to debug
this issue further.
Thank you anyway.
Regards,
Yasu
On Tue, Jul 23, 2024 at 2:37, Thomas Monjalon <thomas@monjalon.net> wrote:
>
> Hello,
>
> I see there is no answer.
>
> Did you try with testpmd?
> I don't know whether this could be a limitation of dpdk-pktgen.
>
>
>
> 29/05/2024 03:32, Yasuhiro Ohara:
> > Hi. My colleague is trying to generate 400Gbps traffic using ConnectX-7,
> > but with no luck.
> >
> > Has anyone succeeded in generating 400Gbps (frames larger than 1500B are fine),
> > or are there any known issues?
> >
> > Using dpdk-pktgen, the link is successfully recognized as 400GbE,
> > but when we start the traffic, it appears to be capped at 100Gbps.
> >
> > Some info follows.
> >
> > [ 1.490256] mlx5_core 0000:01:00.0: firmware version: 28.41.1000
> > [ 1.492853] mlx5_core 0000:01:00.0: 504.112 Gb/s available PCIe
> > bandwidth (32.0 GT/s PCIe x16 link)
> > [ 1.805419] mlx5_core 0000:01:00.0: Rate limit: 127 rates are
> > supported, range: 0Mbps to 195312Mbps
> > [ 1.808477] mlx5_core 0000:01:00.0: E-Switch: Total vports 18, per
> > vport: max uc(128) max mc(2048)
> > [ 1.827270] mlx5_core 0000:01:00.0: Port module event: module 0,
> > Cable unplugged
> > [ 1.830317] mlx5_core 0000:01:00.0: mlx5_pcie_event:298:(pid 9):
> > PCIe slot advertised sufficient power (75W).
> > [ 1.830610] mlx5_core 0000:01:00.0: MLX5E: StrdRq(1) RqSz(8)
> > StrdSz(2048) RxCqeCmprss(0)
> > [ 1.929813] mlx5_core 0000:01:00.0: Supported tc offload range -
> > chains: 4294967294, prios: 4294967295
> >
> > Device type: ConnectX7
> > Name: MCX75310AAS-NEA_Ax
> > Description: NVIDIA ConnectX-7 HHHL Adapter card; 400GbE / NDR IB
> > (default mode); Single-port OSFP; PCIe 5.0 x16; Crypto Disabled;
> > Secure Boot Enabled;
> > Device: /dev/mst/mt4129_pciconf0
> >
> > Enabled Link Speed (Ext.) : 0x00010000 (400G_4X)
> > Supported Cable Speed (Ext.) : 0x00010800 (400G_4X,100G_1X)
> >
> > pktgen result:
> >
> > Pkts/s Rx : 0
> > TX : 8059264
> >
> > MBits/s Rx/Tx : 0/97742
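> >
> > As a sanity check, the reported packet rate and bit rate above are internally consistent. A minimal sketch (the ~1516-byte per-frame accounting is my assumption; the thread does not state how pktgen counts frame bytes):
> >
> > ```python
> > # Sanity check: convert pktgen's reported packets/s into Mbit/s.
> > # Assumption: pktgen is counting roughly 1516 bytes per frame here;
> > # the exact byte accounting is not confirmed in the thread.
> > def rate_mbps(pps: int, frame_bytes: int) -> float:
> >     """Throughput in Mbit/s for a given packet rate and frame size."""
> >     return pps * frame_bytes * 8 / 1e6
> >
> > # Reported Tx rate: 8,059,264 packets/s
> > print(int(rate_mbps(8_059_264, 1516)))  # 97742 -- matches the reported Mbit/s
> > ```
> >
> > So the ~97.7Gbps figure is exactly what ~8.06Mpps at this frame size yields, which points at a rate cap rather than a counting artifact.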
> >
> >
> > Any info is appreciated. Thanks.
> >
> > regards,
> > Yasu
> >