* ConnectX-7 400GbE wire-rate?
From: Yasuhiro Ohara @ 2024-05-29 1:32 UTC
To: users
Hi. My colleague is trying to generate 400Gbps of traffic using a ConnectX-7,
but with no luck so far.
Has anyone succeeded in generating 400Gbps (frames larger than 1500B are fine),
or are there any known issues?
Using dpdk-pktgen, the link is recognized as 400GbE,
but when we start the traffic, the rate appears to be capped at 100Gbps.
Some info follows.
[ 1.490256] mlx5_core 0000:01:00.0: firmware version: 28.41.1000
[ 1.492853] mlx5_core 0000:01:00.0: 504.112 Gb/s available PCIe
bandwidth (32.0 GT/s PCIe x16 link)
[ 1.805419] mlx5_core 0000:01:00.0: Rate limit: 127 rates are
supported, range: 0Mbps to 195312Mbps
[ 1.808477] mlx5_core 0000:01:00.0: E-Switch: Total vports 18, per
vport: max uc(128) max mc(2048)
[ 1.827270] mlx5_core 0000:01:00.0: Port module event: module 0,
Cable unplugged
[ 1.830317] mlx5_core 0000:01:00.0: mlx5_pcie_event:298:(pid 9):
PCIe slot advertised sufficient power (75W).
[ 1.830610] mlx5_core 0000:01:00.0: MLX5E: StrdRq(1) RqSz(8)
StrdSz(2048) RxCqeCmprss(0)
[ 1.929813] mlx5_core 0000:01:00.0: Supported tc offload range -
chains: 4294967294, prios: 4294967295
Device type: ConnectX7
Name: MCX75310AAS-NEA_Ax
Description: NVIDIA ConnectX-7 HHHL Adapter card; 400GbE / NDR IB
(default mode); Single-port OSFP; PCIe 5.0 x16; Crypto Disabled;
Secure Boot Enabled;
Device: /dev/mst/mt4129_pciconf0
Enabled Link Speed (Ext.) : 0x00010000 (400G_4X)
Supported Cable Speed (Ext.) : 0x00010800 (400G_4X,100G_1X)
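For anyone cross-checking this kind of setup, the negotiated link state can also be inspected at runtime. A rough sketch, assuming the NVIDIA MFT tools are installed and the device path is the one shown above (the interface name in the ethtool line is a placeholder):

```shell
# Show operational link state, speed, and FEC for the port.
mlxlink -d /dev/mst/mt4129_pciconf0

# If the kernel netdev is still bound, ethtool reports the speed too
# (replace enp1s0f0np0 with your actual interface name).
ethtool enp1s0f0np0 | grep -i speed
```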
pktgen result:
Pkts/s Rx : 0
TX : 8059264
MBits/s Rx/Tx : 0/97742
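As a quick sanity check on those counters (arithmetic only; the standard 20-byte per-frame preamble + inter-frame gap is assumed for the wire-rate figure): 97742 Mbit/s over 8059264 pkts/s implies roughly 1516-byte frames, and at 400 Gbit/s that frame size needs about 32.5 Mpps:

```shell
# Cross-check the pktgen counters reported above.
awk 'BEGIN {
  mbps = 97742; pps = 8059264
  # Implied frame size from the reported throughput and packet rate.
  printf "implied frame size: %.0f bytes\n", mbps * 1e6 / pps / 8
  # Packet rate needed for 400 Gbit/s at 1518-byte frames, counting
  # the 20-byte per-frame preamble + inter-frame gap on the wire.
  printf "400G at 1518B needs: %.1f Mpps\n", 400e9 / ((1518 + 20) * 8) / 1e6
}'
```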
Any info is appreciated. Thanks.
regards,
Yasu
* Re: ConnectX-7 400GbE wire-rate?
From: Thomas Monjalon @ 2024-07-22 17:37 UTC
To: Yasuhiro Ohara; +Cc: users
Hello,
I see there has been no answer yet.
Did you try with testpmd?
I don't know whether this could be a limitation of dpdk-pktgen.
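For reference, a minimal testpmd TX-only run for this kind of check could look like the sketch below; the PCI address, core list, and queue counts are placeholders, and several TX queues/cores are typically needed to approach 400G:

```shell
# Hypothetical invocation; adjust the PCI address, cores, and queues.
dpdk-testpmd -l 0-8 -a 0000:01:00.0 -- \
    -i --nb-cores=8 --txq=8 --rxq=8 --txd=2048 --rxd=2048

# Then at the testpmd> prompt:
#   set fwd txonly
#   set txpkts 1518
#   start
#   show port stats 0
```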
29/05/2024 03:32, Yasuhiro Ohara:
> [snip]
* Re: ConnectX-7 400GbE wire-rate?
From: Yasuhiro Ohara @ 2024-07-24 1:19 UTC
To: Thomas Monjalon; +Cc: users
Hi Thomas,
Thank you for getting back to this.
From what we have seen, it appears to be a Mellanox firmware issue
rather than a pktgen or DPDK issue.
When we used a ConnectX-7 with firmware version 28.41.1000
on a Core i9 machine, TX bandwidth was limited to 100Gbps,
even though the physical maximum should be 400Gbps.
After downgrading the firmware to 28.39.2048 or 28.39.3004,
the same Core i9 machine could send 256Gbps.
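For anyone reproducing the downgrade, the rough procedure with the NVIDIA MFT tools is sketched below; the firmware image filename is a placeholder, and the image must match the exact card OPN (MCX75310AAS-NEA here):

```shell
# Start the MST service and confirm the device node shown earlier.
mst start
mst status

# Query the currently installed firmware version.
mlxfwmanager --query -d /dev/mst/mt4129_pciconf0

# Burn an older 28.39.x image (placeholder filename).
flint -d /dev/mst/mt4129_pciconf0 -i fw-ConnectX7-rel-28_39_2048.bin burn

# Reset the device so the new firmware loads (or cold-reboot the host).
mlxfwreset -d /dev/mst/mt4129_pciconf0 reset
```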
Our bandwidth demonstration event completed successfully,
so this finding is good enough for us for now.
I think my colleague can help if someone wants to debug
this issue further.
Thank you anyway.
Regards,
Yasu
On Tue, Jul 23, 2024 at 2:37, Thomas Monjalon <thomas@monjalon.net> wrote:
>
> [snip]
* Re: ConnectX-7 400GbE wire-rate?
From: Cliff Burdick @ 2024-07-24 14:49 UTC
To: Yasuhiro Ohara; +Cc: Thomas Monjalon, users
To answer your original question: yes, I've hit 400Gbps many times, provided
there were enough queues/cores. You can also see that test 12.2 here achieved
line rate:
https://fast.dpdk.org/doc/perf/DPDK_23_11_NVIDIA_NIC_performance_report.pdf
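To illustrate the queues/cores point, a dpdk-pktgen invocation spreading one port across multiple cores might look like the sketch below; the core list, PCI address, and core-to-port mapping are placeholders:

```shell
# Hypothetical run: cores 1-8 handle RX and 9-16 handle TX for port 0.
pktgen -l 0-16 -n 4 -a 0000:01:00.0 -- -P -m "[1-8:9-16].0"

# Inside pktgen, larger frames lower the packet rate needed for 400G:
#   set 0 size 1518
#   start 0
```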
On Wed, Jul 24, 2024 at 4:58 AM Yasuhiro Ohara <yasu1976@gmail.com> wrote:
> [snip]
* Re: ConnectX-7 400GbE wire-rate?
From: Yasuhiro Ohara @ 2024-07-24 16:07 UTC
To: Cliff Burdick; +Cc: Thomas Monjalon, users
Hi Cliff.
I saw it, and thought at the time that 200GbE x 2 might behave
differently from 400GbE x 1.
But it is good to know that you have hit 400Gbps multiple times.
Thank you.
regards,
Yasu
On Wed, Jul 24, 2024 at 23:50, Cliff Burdick <shaklee3@gmail.com> wrote:
>
> [snip]