DPDK patches and discussions
* [dpdk-dev] Pktgen Only Enables One RX Queue with Mellanox ConnectX-5
@ 2019-06-05 22:08 David Christensen
  2019-06-05 22:30 ` Wiles, Keith
  0 siblings, 1 reply; 3+ messages in thread
From: David Christensen @ 2019-06-05 22:08 UTC (permalink / raw)
  To: dev

I'm using a Mellanox ConnectX-5 @ 100Gb on a Supermicro system with 
DPDK 18.11.1 and pktgen 3.6.6 from the master branch (downloaded 
today).  When I configure pktgen to use multiple queues, the TX queues 
are set up correctly, but the RX side doesn't behave as expected: 
pktgen reports the correct number of RX queues during startup, yet the 
stats and xstats pages always show traffic arriving on a single RX 
queue.

For example, here's how pktgen starts:

$ sudo -E 
LD_LIBRARY_PATH=/home/davec/src/dpdk/x86_64-native-linuxapp-gcc/lib 
/home/davec/src/pktgen/app/x86_64-native-linuxapp-gcc/pktgen -l 1,2-13 
-w 04:00.0 -w 04:00.1 -n 3 -- -P -m "[2-7:2-7].0, [8-13:8-13].1";

Copyright (c) <2010-2019>, Intel Corporation. All rights reserved. 
Powered by DPDK
EAL: Detected 56 lcore(s)
EAL: Detected 2 NUMA nodes
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Probing VFIO support...
EAL: PCI device 0000:04:00.0 on NUMA socket 0
EAL:   probe driver: 15b3:1019 net_mlx5
EAL: PCI device 0000:04:00.1 on NUMA socket 0
EAL:   probe driver: 15b3:1019 net_mlx5
Lua 5.3.5  Copyright (C) 1994-2018 Lua.org, PUC-Rio

*** Copyright (c) <2010-2019>, Intel Corporation. All rights reserved.
*** Pktgen created by: Keith Wiles -- >>> Powered by DPDK <<<

Initialize Port 0 -- TxQ 6, RxQ 6,  Src MAC ec:0d:9a:ca:b4:98
Initialize Port 1 -- TxQ 6, RxQ 6,  Src MAC ec:0d:9a:ca:b4:99

Port  0: Link Up - speed 100000 Mbps - full-duplex <Enable promiscuous mode>
Port  1: Link Up - speed 100000 Mbps - full-duplex <Enable promiscuous mode>

   RX/TX processing lcore:   2 rx:  1 tx:  1
   RX/TX processing lcore:   3 rx:  1 tx:  1
   RX/TX processing lcore:   4 rx:  1 tx:  1
   RX/TX processing lcore:   5 rx:  1 tx:  1
   RX/TX processing lcore:   6 rx:  1 tx:  1
   RX/TX processing lcore:   7 rx:  1 tx:  1
   RX/TX processing lcore:   8 rx:  1 tx:  1
   RX/TX processing lcore:   9 rx:  1 tx:  1
   RX/TX processing lcore:  10 rx:  1 tx:  1
   RX/TX processing lcore:  11 rx:  1 tx:  1
   RX/TX processing lcore:  12 rx:  1 tx:  1
   RX/TX processing lcore:  13 rx:  1 tx:  1

Note that the number of RX queues is correct.

I use the following commands to start generating traffic.  (The link 
partner is running DPDK 18.11.1 with the testpmd app configured for "io" 
forwarding.)

set all size 64
set all rate 100
set all count 0
set all burst 16
range all src port 1 1 1023 1
range all dst ip 10.0.0.0 10.0.0.0 10.0.255.255 0.0.0.1
range all src ip 10.0.0.0 10.0.0.0 10.0.255.255 0.0.0.1
range 0 src mac 00:00:00:00:00:00 00:00:00:00:00:00 00:12:34:56:78:90 
00:00:00:01:01:01
range 0 dst mac 00:20:00:00:00:00 00:20:00:00:00:00 00:98:76:54:32:10 
00:00:00:01:01:01
range all size 64 64 64 0
enable all range
start all

Later, when I'm running traffic, the statistics show something different:

Pktgen:/> page stats
|                  <Real Port Stats Page>  Copyright (c) <2010-2019>, Intel Corporation

Port 0                        Pkts Rx/Tx      Rx Errors/Missed  Rate Rx/Tx           MAC Address
  0-04:00.0          542522040/2806635488                   0/0 10182516/51993536     EC:0D:9A:CA:B4:98

                  ipackets       opackets         ibytes         obytes        errors
      Q  0:      542522040      546551712    32551322400    32793102720             0
      Q  1:              0      451205888              0    27072353280             0
      Q  2:              0      457296176              0    27437770560             0
      Q  3:              0      455300832              0    27318049920             0
      Q  4:              0      442654816              0    26559288960             0
      Q  5:              0      453626064              0    27217563840             0
      Q  6:              0              0              0              0             0
      Q  7:              0              0              0              0             0
      Q  8:              0              0              0              0             0
      Q  9:              0              0              0              0             0
      Q 10:              0              0              0              0             0
      Q 11:              0              0              0              0             0
      Q 12:              0              0              0              0             0
      Q 13:              0              0              0              0             0
      Q 14:              0              0              0              0             0
      Q 15:              0              0              0              0             0
-- Pktgen Ver: 3.6.6 (DPDK 18.11.1)  Powered by DPDK  (pid:15485)

Traffic is only received on RX queue 0.  Anyone run into this?  The link 
partner shows traffic received and transmitted on all configured queues 
(16 in this case) so I don't think the link partner is dropping traffic 
in such a way that the remaining traffic flows to a single RX queue on 
the SUT.

Dave


* Re: [dpdk-dev] Pktgen Only Enables One RX Queue with Mellanox ConnectX-5
  2019-06-05 22:08 [dpdk-dev] Pktgen Only Enables One RX Queue with Mellanox ConnectX-5 David Christensen
@ 2019-06-05 22:30 ` Wiles, Keith
  2019-06-05 23:10   ` David Christensen
  0 siblings, 1 reply; 3+ messages in thread
From: Wiles, Keith @ 2019-06-05 22:30 UTC (permalink / raw)
  To: David Christensen; +Cc: dev



> On Jun 5, 2019, at 5:08 PM, David Christensen <drc@linux.vnet.ibm.com> wrote:
> 
> I'm using a Mellanox ConnectX-5 @ 100Gb on a Supermicro system with DPDK 18.11.1 and pktgen 3.6.6 from the master branch (downloaded today).  When I configure pktgen to use multiple queues, the TX queues are set up correctly, but the RX side doesn't behave as expected: pktgen reports the correct number of RX queues during startup, yet the stats and xstats pages always show traffic arriving on a single RX queue.
> 
> For example, here's how pktgen starts:
> 
> $ sudo -E LD_LIBRARY_PATH=/home/davec/src/dpdk/x86_64-native-linuxapp-gcc/lib /home/davec/src/pktgen/app/x86_64-native-linuxapp-gcc/pktgen -l 1,2-13 -w 04:00.0 -w 04:00.1 -n 3 -- -P -m "[2-7:2-7].0, [8-13:8-13].1";
> 
> Copyright (c) <2010-2019>, Intel Corporation. All rights reserved. Powered by DPDK
> EAL: Detected 56 lcore(s)
> EAL: Detected 2 NUMA nodes
> EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
> EAL: Probing VFIO support...
> EAL: PCI device 0000:04:00.0 on NUMA socket 0
> EAL:   probe driver: 15b3:1019 net_mlx5
> EAL: PCI device 0000:04:00.1 on NUMA socket 0
> EAL:   probe driver: 15b3:1019 net_mlx5
> Lua 5.3.5  Copyright (C) 1994-2018 Lua.org, PUC-Rio
> 
> *** Copyright (c) <2010-2019>, Intel Corporation. All rights reserved.
> *** Pktgen created by: Keith Wiles -- >>> Powered by DPDK <<<
> 
> Initialize Port 0 -- TxQ 6, RxQ 6,  Src MAC ec:0d:9a:ca:b4:98
> Initialize Port 1 -- TxQ 6, RxQ 6,  Src MAC ec:0d:9a:ca:b4:99
> 
> Port  0: Link Up - speed 100000 Mbps - full-duplex <Enable promiscuous mode>
> Port  1: Link Up - speed 100000 Mbps - full-duplex <Enable promiscuous mode>
> 
>  RX/TX processing lcore:   2 rx:  1 tx:  1
>  RX/TX processing lcore:   3 rx:  1 tx:  1
>  RX/TX processing lcore:   4 rx:  1 tx:  1
>  RX/TX processing lcore:   5 rx:  1 tx:  1
>  RX/TX processing lcore:   6 rx:  1 tx:  1
>  RX/TX processing lcore:   7 rx:  1 tx:  1
>  RX/TX processing lcore:   8 rx:  1 tx:  1
>  RX/TX processing lcore:   9 rx:  1 tx:  1
>  RX/TX processing lcore:  10 rx:  1 tx:  1
>  RX/TX processing lcore:  11 rx:  1 tx:  1
>  RX/TX processing lcore:  12 rx:  1 tx:  1
>  RX/TX processing lcore:  13 rx:  1 tx:  1
> 
> Note that the number of RX queues is correct.
> 
> I use the following commands to start generating traffic.  (The link partner is running DPDK 18.11.1 with the testpmd app configured for "io" forwarding.)
> 
> set all size 64
> set all rate 100
> set all count 0
> set all burst 16
> range all src port 1 1 1023 1
> range all dst ip 10.0.0.0 10.0.0.0 10.0.255.255 0.0.0.1
> range all src ip 10.0.0.0 10.0.0.0 10.0.255.255 0.0.0.1
> range 0 src mac 00:00:00:00:00:00 00:00:00:00:00:00 00:12:34:56:78:90 00:00:00:01:01:01
> range 0 dst mac 00:20:00:00:00:00 00:20:00:00:00:00 00:98:76:54:32:10 00:00:00:01:01:01
> range all size 64 64 64 0
> enable all range
> start all
> 
> Later, when I'm running traffic, the statistics show something different:
> 
> Pktgen:/> page stats
> |                  <Real Port Stats Page>  Copyright (c) <2010-2019>, Intel Corporation
> 
> Port 0                        Pkts Rx/Tx      Rx Errors/Missed  Rate Rx/Tx           MAC Address
> 0-04:00.0          542522040/2806635488                   0/0 10182516/51993536     EC:0D:9A:CA:B4:98
> 
>                 ipackets       opackets         ibytes         obytes        errors
>     Q  0:      542522040      546551712    32551322400    32793102720             0
>     Q  1:              0      451205888              0    27072353280             0
>     Q  2:              0      457296176              0    27437770560             0
>     Q  3:              0      455300832              0    27318049920             0
>     Q  4:              0      442654816              0    26559288960             0
>     Q  5:              0      453626064              0    27217563840             0
>     Q  6:              0              0              0              0             0
>     Q  7:              0              0              0              0             0
>     Q  8:              0              0              0              0             0
>     Q  9:              0              0              0              0             0
>     Q 10:              0              0              0              0             0
>     Q 11:              0              0              0              0             0
>     Q 12:              0              0              0              0             0
>     Q 13:              0              0              0              0             0
>     Q 14:              0              0              0              0             0
>     Q 15:              0              0              0              0             0
> -- Pktgen Ver: 3.6.6 (DPDK 18.11.1)  Powered by DPDK  (pid:15485)
> 
> Traffic is only received on RX queue 0.  Anyone run into this?  The link partner shows traffic received and transmitted on all configured queues (16 in this case) so I don't think the link partner is dropping traffic in such a way that the remaining traffic flows to a single RX queue on the SUT.

This normally means that RSS is not distributing the RX traffic to the 
other queues, which suggests the RX traffic is not varied enough for 
RSS to spread it.  That would be my best guess; I have not used the 
Mellanox cards.
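
For reference, which header fields feed the RSS hash is controlled by 
the rss_hf mask the application passes when the port is configured.  
Below is a minimal sketch of enabling RSS with the DPDK 18.11 ethdev 
API; it is illustrative only, not pktgen's code, and the helper name 
configure_port_with_rss is made up for the example:

#include <rte_ethdev.h>

/* Illustrative sketch (not pktgen's code): enable RSS across the RX
 * queues and hash on IP addresses plus TCP/UDP ports, so varying the
 * L4 source port also changes which queue a flow lands on.  With only
 * ETH_RSS_IP, port variation does not affect the hash. */
static int
configure_port_with_rss(uint16_t port_id, uint16_t nb_rxq, uint16_t nb_txq)
{
        struct rte_eth_dev_info dev_info;
        struct rte_eth_conf conf = {
                .rxmode = { .mq_mode = ETH_MQ_RX_RSS },
                .rx_adv_conf = {
                        .rss_conf = {
                                .rss_key = NULL,   /* PMD default key */
                                .rss_hf  = ETH_RSS_IP | ETH_RSS_UDP |
                                           ETH_RSS_TCP,
                        },
                },
                .txmode = { .mq_mode = ETH_MQ_TX_NONE },
        };

        /* Only request hash types this PMD actually supports. */
        rte_eth_dev_info_get(port_id, &dev_info);
        conf.rx_adv_conf.rss_conf.rss_hf &= dev_info.flow_type_rss_offloads;

        return rte_eth_dev_configure(port_id, nb_rxq, nb_txq, &conf);
}

Whether a given rss_hf combination is honoured depends on the PMD, 
which is why the sketch masks it against flow_type_rss_offloads.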
> 
> Dave

Regards,
Keith



* Re: [dpdk-dev] Pktgen Only Enables One RX Queue with Mellanox ConnectX-5
  2019-06-05 22:30 ` Wiles, Keith
@ 2019-06-05 23:10   ` David Christensen
  0 siblings, 0 replies; 3+ messages in thread
From: David Christensen @ 2019-06-05 23:10 UTC (permalink / raw)
  To: Wiles, Keith; +Cc: dev

>> Traffic is only received on RX queue 0.  Anyone run into this?  The link partner shows traffic received and transmitted on all configured queues (16 in this case) so I don't think the link partner is dropping traffic in such a way that the remaining traffic flows to a single RX queue on the SUT.
> 
> This normally means that RSS is not distributing the RX traffic to the other queues, which suggests the RX traffic is not varied enough for RSS to spread it.  That would be my best guess; I have not used the Mellanox cards.

Remember that the traffic is being generated by pktgen.  Is there 
something I can improve in the "range" commands I used to start the 
test?  When I review the extended statistics from testpmd's "show port 
xstats all" command, I see that traffic is distributed correctly when 
received by the SUT (i.e. each per-RX-queue statistic is incrementing 
and shows roughly the same value).  And the SUT uses the same MLX5 
adapter that the pktgen host does.

Similarly, the per-TX-queue statistics on the SUT are incrementing as 
well, suggesting that the traffic is being looped back to the pktgen 
host.  If there's a problem with RSS, it would appear to be on the 
pktgen host side.  Is that expected?
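
For a rough per-queue check on either host, the generic stats structure 
also carries per-queue counters for the first RTE_ETHDEV_QUEUE_STAT_CNTRS 
queues.  Whether a PMD fills them in varies by driver, so treat the 
following as a sketch rather than guaranteed mlx5 behaviour; the helper 
name is made up for the example:

#include <inttypes.h>
#include <stdio.h>
#include <rte_ethdev.h>

/* Sketch: print RX packet counts per queue so an uneven RSS spread is
 * easy to spot.  Not every PMD populates q_ipackets. */
static void
print_rx_queue_counts(uint16_t port_id, uint16_t nb_rxq)
{
        struct rte_eth_stats stats;
        uint16_t q;

        if (rte_eth_stats_get(port_id, &stats) != 0)
                return;

        for (q = 0; q < nb_rxq && q < RTE_ETHDEV_QUEUE_STAT_CNTRS; q++)
                printf("port %u rxq %u: %" PRIu64 " packets\n",
                       port_id, q, stats.q_ipackets[q]);
}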

I haven't looked into how RSS is enabled with DPDK, but it looks like 
pktgen handles RSS differently depending on the DPDK version.  Does the 
following code snippet mean that RSS isn't enabled by pktgen on my 
build with DPDK v18.11?

app/pktgen-port-cfg.c:

static struct rte_eth_conf default_port_conf = {
#if RTE_VERSION <= RTE_VERSION_NUM(18, 5, 0, 0)
        .rxmode = {
                .mq_mode = ETH_MQ_RX_RSS,
                .max_rx_pkt_len = ETHER_MAX_LEN,
                .split_hdr_size = 0,
                .ignore_offload_bitfield = 1,
                .offloads = (DEV_RX_OFFLOAD_CRC_STRIP |
                             DEV_RX_OFFLOAD_CHECKSUM),
        },
        .rx_adv_conf = {
                .rss_conf = {
                        .rss_key = NULL,
                        .rss_hf = ETH_RSS_IP,
                },
        },
        .txmode = {
                .mq_mode = ETH_MQ_TX_NONE,
        },
#else
        .rxmode = {
                .split_hdr_size = 0,
Dave


