DPDK usage discussions
* [dpdk-users] low Tx throughputs in DPDK with Mellanox ConnectX-3 card
@ 2017-07-21 16:05 zhilong zheng
  2017-07-24 18:20 ` Adrien Mazarguil
  0 siblings, 1 reply; 4+ messages in thread
From: zhilong zheng @ 2017-07-21 16:05 UTC (permalink / raw)
  To: users

Hi all,

I am having a problem generating packets to a Mellanox ConnectX-3 dual-port 40G card with the latest pktgen-dpdk.

The problem is that it can only generate ~22 Gbit/s per port (I actually use only one port), which does not saturate the 40G link. The server has two 12-core E5-2650 v4 CPUs @ 2.20 GHz and 128 GB of 2400 MHz DDR4 memory. The DPDK version is 16.11.

This is the driver bound to the NIC:   0000:81:00.0 'MT27500 Family [ConnectX-3]' if=p6p1,p6p2 drv=mlx4_core unused=
I guess this is a driver problem. The documentation says the driver name should be librte_pmd_mlx4 (http://dpdk.org/doc/guides/nics/mlx4.html), yet after completing the installation the NIC is bound to mlx4_core.

Any clue about this problem? Is it caused by the driver or by something else?

Many thanks,
Zhilong.

^ permalink raw reply	[flat|nested] 4+ messages in thread

* Re: [dpdk-users] low Tx throughputs in DPDK with Mellanox ConnectX-3 card
  2017-07-21 16:05 [dpdk-users] low Tx throughputs in DPDK with Mellanox ConnectX-3 card zhilong zheng
@ 2017-07-24 18:20 ` Adrien Mazarguil
  2017-07-25 13:20   ` zhilong zheng
  0 siblings, 1 reply; 4+ messages in thread
From: Adrien Mazarguil @ 2017-07-24 18:20 UTC (permalink / raw)
  To: zhilong zheng; +Cc: users

Hi Zhilong,

On Sat, Jul 22, 2017 at 12:05:51AM +0800, zhilong zheng wrote:
> Hi all,
> 
> I am having a problem generating packets to a Mellanox ConnectX-3 dual-port 40G card with the latest pktgen-dpdk.
> 
> The problem is that it can only generate ~22 Gbit/s per port (I actually use only one port), which does not saturate the 40G link. The server has two 12-core E5-2650 v4 CPUs @ 2.20 GHz and 128 GB of 2400 MHz DDR4 memory. The DPDK version is 16.11.
> 
> This is the driver bound to the NIC:   0000:81:00.0 'MT27500 Family [ConnectX-3]' if=p6p1,p6p2 drv=mlx4_core unused=
> I guess this is a driver problem. The documentation says the driver name should be librte_pmd_mlx4 (http://dpdk.org/doc/guides/nics/mlx4.html), yet after completing the installation the NIC is bound to mlx4_core.

It's OK: mlx4_core is the name of the kernel driver, while
librte_pmd_mlx4 is the name of the DPDK PMD. There is no
librte_pmd_mlx4 kernel module; see the blurb about prerequisites [1].

> Any clue about this problem? Is it caused by the driver or by something else?

Depending on packet size and other configuration settings, you may have
hit the maximum packet rate; these devices cannot reach line rate with
64-byte packets, for instance.
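
To illustrate with a back-of-the-envelope sketch (assuming the usual
20 bytes of per-frame wire overhead: 7 B preamble + 1 B start-of-frame
delimiter + 12 B inter-frame gap, with the 4 B FCS already counted in
the frame size), the packet rate needed to fill a 40G link grows
quickly as frames shrink:

```python
# Line-rate packet rates on a 40G link for several frame sizes.
# Per-frame wire overhead: 7 B preamble + 1 B SFD + 12 B inter-frame
# gap = 20 B (the 4 B FCS is already included in the frame size).
LINK_BPS = 40e9
OVERHEAD = 20

for size in (64, 128, 256, 512, 1518):
    wire_bits = (size + OVERHEAD) * 8
    mpps = LINK_BPS / wire_bits / 1e6
    print(f"{size:5d} B -> {mpps:6.2f} Mpps at line rate")
```

Sustaining the roughly 59.5 Mpps needed at 64 B is well beyond a
single ConnectX-3 port, while the requirement drops fast with larger
frames (only ~3.3 Mpps at 1518 B).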

[1] http://dpdk.org/doc/guides/nics/mlx4.html#prerequisites

-- 
Adrien Mazarguil
6WIND

^ permalink raw reply	[flat|nested] 4+ messages in thread

* Re: [dpdk-users] low Tx throughputs in DPDK with Mellanox ConnectX-3 card
  2017-07-24 18:20 ` Adrien Mazarguil
@ 2017-07-25 13:20   ` zhilong zheng
  2017-07-25 14:06     ` Adrien Mazarguil
  0 siblings, 1 reply; 4+ messages in thread
From: zhilong zheng @ 2017-07-25 13:20 UTC (permalink / raw)
  To: Adrien Mazarguil; +Cc: users

Hi Adrien,

Thanks for your reply and suggestion. I changed the packet size to 128 B and it now generates ~34 Gbit/s, and ~40 Gbit/s with 256 B and larger.

Regards,
Zhilong

> On Jul 25, 2017, at 02:20, Adrien Mazarguil <adrien.mazarguil@6wind.com> wrote:
> 
> Hi Zhilong,
> 
> On Sat, Jul 22, 2017 at 12:05:51AM +0800, zhilong zheng wrote:
>> Hi all,
>> 
>> I am having a problem generating packets to a Mellanox ConnectX-3 dual-port 40G card with the latest pktgen-dpdk.
>> 
>> The problem is that it can only generate ~22 Gbit/s per port (I actually use only one port), which does not saturate the 40G link. The server has two 12-core E5-2650 v4 CPUs @ 2.20 GHz and 128 GB of 2400 MHz DDR4 memory. The DPDK version is 16.11.
>> 
>> This is the driver bound to the NIC:   0000:81:00.0 'MT27500 Family [ConnectX-3]' if=p6p1,p6p2 drv=mlx4_core unused=
>> I guess this is a driver problem. The documentation says the driver name should be librte_pmd_mlx4 (http://dpdk.org/doc/guides/nics/mlx4.html), yet after completing the installation the NIC is bound to mlx4_core.
> 
> It's OK: mlx4_core is the name of the kernel driver, while
> librte_pmd_mlx4 is the name of the DPDK PMD. There is no
> librte_pmd_mlx4 kernel module; see the blurb about prerequisites [1].
> 
>> Any clue about this problem? Is it caused by the driver or by something else?
> 
> Depending on packet size and other configuration settings, you may have
> hit the maximum packet rate; these devices cannot reach line rate with
> 64-byte packets, for instance.
> 
> [1] http://dpdk.org/doc/guides/nics/mlx4.html#prerequisites
> 
> -- 
> Adrien Mazarguil
> 6WIND

^ permalink raw reply	[flat|nested] 4+ messages in thread

* Re: [dpdk-users] low Tx throughputs in DPDK with Mellanox ConnectX-3 card
  2017-07-25 13:20   ` zhilong zheng
@ 2017-07-25 14:06     ` Adrien Mazarguil
  0 siblings, 0 replies; 4+ messages in thread
From: Adrien Mazarguil @ 2017-07-25 14:06 UTC (permalink / raw)
  To: zhilong zheng; +Cc: users

Hi Zhilong,

On Tue, Jul 25, 2017 at 09:20:38PM +0800, zhilong zheng wrote:
> Hi Adrien,
> 
> Thanks for your reply and suggestion. I changed the packet size to 128 B and it now generates ~34 Gbit/s, and ~40 Gbit/s with 256 B and larger.

OK, it looks like it scales properly; I don't think you can get much
more than this with small packets.
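
For what it's worth, the reported figures are consistent with a
per-port packet-rate ceiling. Assuming pktgen's Gbit/s numbers include
the 20 B per-frame preamble/IFG overhead (an assumption; the counting
convention is not stated in this thread), the implied packet rates are:

```python
# Implied packet rate for each reported (frame size, throughput) pair,
# assuming the throughput figure counts frame bytes plus 20 B of
# per-frame wire overhead (preamble + SFD + inter-frame gap).
OVERHEAD = 20
reported = [(64, 22e9), (128, 34e9), (256, 40e9)]  # frame size B, bit/s

for size, bps in reported:
    mpps = bps / ((size + OVERHEAD) * 8) / 1e6
    print(f"{size:3d} B at {bps / 1e9:4.1f} Gbit/s -> ~{mpps:4.1f} Mpps")
```

The 64 B and 128 B cases both land around ~30 Mpps, while the 256 B
case needs only ~18 Mpps, which is what you would expect if the port
saturates at roughly 30 Mpps (a rough estimate; the exact ceiling
depends on firmware, PCIe and queue configuration).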

-- 
Adrien Mazarguil
6WIND

^ permalink raw reply	[flat|nested] 4+ messages in thread

end of thread, other threads:[~2017-07-25 14:06 UTC | newest]

Thread overview: 4+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2017-07-21 16:05 [dpdk-users] low Tx throughputs in DPDK with Mellanox ConnectX-3 card zhilong zheng
2017-07-24 18:20 ` Adrien Mazarguil
2017-07-25 13:20   ` zhilong zheng
2017-07-25 14:06     ` Adrien Mazarguil
