From: "Wiles, Keith" <keith.wiles@intel.com>
To: David Christensen <drc@linux.vnet.ibm.com>
Cc: "dev@dpdk.org" <dev@dpdk.org>
Subject: Re: [dpdk-dev] Pktgen Only Enables One RX Queue with Mellanox ConnectX-5
Date: Wed, 5 Jun 2019 22:30:26 +0000
Message-ID: <FFD39791-1147-42F8-90FC-EE94B46B633C@intel.com>
In-Reply-To: <671a1e2e-cf45-e86f-01ef-3c05530ab067@linux.vnet.ibm.com>
> On Jun 5, 2019, at 5:08 PM, David Christensen <drc@linux.vnet.ibm.com> wrote:
>
> I'm using a Mellanox ConnectX-5 @ 100Gb on a Supermicro system with DPDK 18.11.1 and pktgen 3.6.6 from the master branch (downloaded today). When I configure pktgen to use multiple queues, the number of TX queues is set correctly, but RX doesn't look right: pktgen reports the correct number of RX queues during startup, yet the stats and xstats pages always show traffic arriving on a single RX queue.
>
> For example, here's how pktgen starts:
>
> $ sudo -E LD_LIBRARY_PATH=/home/davec/src/dpdk/x86_64-native-linuxapp-gcc/lib /home/davec/src/pktgen/app/x86_64-native-linuxapp-gcc/pktgen -l 1,2-13 -w 04:00.0 -w 04:00.1 -n 3 -- -P -m "[2-7:2-7].0, [8-13:8-13].1";
>
> Copyright (c) <2010-2019>, Intel Corporation. All rights reserved. Powered by DPDK
> EAL: Detected 56 lcore(s)
> EAL: Detected 2 NUMA nodes
> EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
> EAL: Probing VFIO support...
> EAL: PCI device 0000:04:00.0 on NUMA socket 0
> EAL: probe driver: 15b3:1019 net_mlx5
> EAL: PCI device 0000:04:00.1 on NUMA socket 0
> EAL: probe driver: 15b3:1019 net_mlx5
> Lua 5.3.5 Copyright (C) 1994-2018 Lua.org, PUC-Rio
>
> *** Copyright (c) <2010-2019>, Intel Corporation. All rights reserved.
> *** Pktgen created by: Keith Wiles -- >>> Powered by DPDK <<<
>
> Initialize Port 0 -- TxQ 6, RxQ 6, Src MAC ec:0d:9a:ca:b4:98
> Initialize Port 1 -- TxQ 6, RxQ 6, Src MAC ec:0d:9a:ca:b4:99
>
> Port 0: Link Up - speed 100000 Mbps - full-duplex <Enable promiscuous mode>
> Port 1: Link Up - speed 100000 Mbps - full-duplex <Enable promiscuous mode>
>
> RX/TX processing lcore: 2 rx: 1 tx: 1
> RX/TX processing lcore: 3 rx: 1 tx: 1
> RX/TX processing lcore: 4 rx: 1 tx: 1
> RX/TX processing lcore: 5 rx: 1 tx: 1
> RX/TX processing lcore: 6 rx: 1 tx: 1
> RX/TX processing lcore: 7 rx: 1 tx: 1
> RX/TX processing lcore: 8 rx: 1 tx: 1
> RX/TX processing lcore: 9 rx: 1 tx: 1
> RX/TX processing lcore: 10 rx: 1 tx: 1
> RX/TX processing lcore: 11 rx: 1 tx: 1
> RX/TX processing lcore: 12 rx: 1 tx: 1
> RX/TX processing lcore: 13 rx: 1 tx: 1
>
> Note that the number of RX queues is correct.
>
> I use the following commands to start generating traffic. (The link partner is running DPDK 18.11.1 with the testpmd app configured for "io" forwarding.)
>
> set all size 64
> set all rate 100
> set all count 0
> set all burst 16
> range all src port 1 1 1023 1
> range all dst ip 10.0.0.0 10.0.0.0 10.0.255.255 0.0.0.1
> range all src ip 10.0.0.0 10.0.0.0 10.0.255.255 0.0.0.1
> range 0 src mac 00:00:00:00:00:00 00:00:00:00:00:00 00:12:34:56:78:90 00:00:00:01:01:01
> range 0 dst mac 00:20:00:00:00:00 00:20:00:00:00:00 00:98:76:54:32:10 00:00:00:01:01:01
> range all size 64 64 64 0
> enable all range
> start all
>
> Later, when I'm running traffic, the statistics show something different:
>
> Pktgen:/> page stats
> | <Real Port Stats Page> Copyright (c) <2010-2019>, Intel Corporation
>
> Port 0 Pkts Rx/Tx Rx Errors/Missed Rate Rx/Tx MAC Address
> 0-04:00.0 542522040/2806635488 0/0 10182516/51993536 EC:0D:9A:CA:B4:98
>
> ipackets opackets ibytes obytes errors
> Q 0: 542522040 546551712 32551322400 32793102720 0
> Q 1: 0 451205888 0 27072353280 0
> Q 2: 0 457296176 0 27437770560 0
> Q 3: 0 455300832 0 27318049920 0
> Q 4: 0 442654816 0 26559288960 0
> Q 5: 0 453626064 0 27217563840 0
> Q 6: 0 0 0 0 0
> Q 7: 0 0 0 0 0
> Q 8: 0 0 0 0 0
> Q 9: 0 0 0 0 0
> Q 10: 0 0 0 0 0
> Q 11: 0 0 0 0 0
> Q 12: 0 0 0 0 0
> Q 13: 0 0 0 0 0
> Q 14: 0 0 0 0 0
> Q 15: 0 0 0 0 0
> -- Pktgen Ver: 3.6.6 (DPDK 18.11.1) Powered by DPDK (pid:15485)
>
> Traffic is only received on RX queue 0. Has anyone run into this? The link partner shows traffic received and transmitted on all configured queues (16 in this case), so I don't think the link partner is dropping traffic in such a way that the remaining traffic flows to a single RX queue on the SUT.
This normally means RSS is not distributing the RX traffic to the other queues, which usually means the RX traffic is not varied enough for RSS to spread it. That would be my best guess; I have not used the Mellanox cards.
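For what it is worth, two things generally have to line up for RX traffic to spread across queues: the port has to be configured with RSS enabled and a hash covering the fields that actually vary (IP addresses, L4 ports), and the incoming packets have to vary in those fields. Below is a minimal, generic DPDK 18.11-style sketch of the port-configuration side; it is not pktgen's actual code, and the helper name and queue counts are placeholders:

#include <rte_ethdev.h>

/* Hypothetical helper, for illustration only: configure a port with
 * RSS enabled so received flows are hashed across nb_rxq queues.
 * Uses the DPDK 18.11-era ETH_* names. */
static int
setup_port_rss(uint16_t port_id, uint16_t nb_rxq, uint16_t nb_txq)
{
    struct rte_eth_conf conf = {
        .rxmode = { .mq_mode = ETH_MQ_RX_RSS },
        .rx_adv_conf = {
            .rss_conf = {
                .rss_key = NULL,           /* let the PMD pick its default key */
                .rss_hf  = ETH_RSS_IP |    /* hash on IPv4/IPv6 addresses */
                           ETH_RSS_UDP |   /* ...and UDP ports */
                           ETH_RSS_TCP,    /* ...and TCP ports */
            },
        },
    };

    /* The ethdev layer rejects hash types the PMD does not support, so
     * checking dev_info.flow_type_rss_offloads first avoids surprises. */
    return rte_eth_dev_configure(port_id, nb_rxq, nb_txq, &conf);
}

Even with that in place, if every packet carries the same 5-tuple the hash is constant and everything still lands on queue 0, so it is worth confirming on the wire that the ranged fields really are changing.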
>
> Dave
Regards,
Keith