From: "Wiles, Keith"
To: David Christensen
Cc: dev@dpdk.org
Date: Wed, 5 Jun 2019 22:30:26 +0000
Subject: Re: [dpdk-dev] Pktgen Only Enables One RX Queue with Mellanox ConnectX-5

> On Jun 5, 2019, at 5:08 PM, David Christensen wrote:
>
> Using a Mellanox ConnectX-5 @ 100Gb on a Supermicro system with DPDK
> 18.11.1 and pktgen 3.6.6 from the master branch (downloaded today).
> When I configure pktgen to use multiple queues, I'm able to set the
> number of TX queues correctly, but the number of RX queues doesn't
> look correct. pktgen indicates the correct number of queues during
> startup, but the stats and xstats pages always show traffic going to
> a single RX queue.
>
> For example, here's how pktgen starts:
>
> $ sudo -E LD_LIBRARY_PATH=/home/davec/src/dpdk/x86_64-native-linuxapp-gcc/lib \
>     /home/davec/src/pktgen/app/x86_64-native-linuxapp-gcc/pktgen \
>     -l 1,2-13 -w 04:00.0 -w 04:00.1 -n 3 -- -P -m "[2-7:2-7].0, [8-13:8-13].1"
>
> Copyright (c) <2010-2019>, Intel Corporation. All rights reserved. Powered by DPDK
> EAL: Detected 56 lcore(s)
> EAL: Detected 2 NUMA nodes
> EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
> EAL: Probing VFIO support...
> EAL: PCI device 0000:04:00.0 on NUMA socket 0
> EAL:   probe driver: 15b3:1019 net_mlx5
> EAL: PCI device 0000:04:00.1 on NUMA socket 0
> EAL:   probe driver: 15b3:1019 net_mlx5
> Lua 5.3.5  Copyright (C) 1994-2018 Lua.org, PUC-Rio
>
> *** Copyright (c) <2010-2019>, Intel Corporation. All rights reserved.
> *** Pktgen created by: Keith Wiles -- >>> Powered by DPDK <<<
>
> Initialize Port 0 -- TxQ 6, RxQ 6, Src MAC ec:0d:9a:ca:b4:98
> Initialize Port 1 -- TxQ 6, RxQ 6, Src MAC ec:0d:9a:ca:b4:99
>
> Port 0: Link Up - speed 100000 Mbps - full-duplex
> Port 1: Link Up - speed 100000 Mbps - full-duplex
>
> RX/TX processing lcore:  2 rx: 1 tx: 1
> RX/TX processing lcore:  3 rx: 1 tx: 1
> RX/TX processing lcore:  4 rx: 1 tx: 1
> RX/TX processing lcore:  5 rx: 1 tx: 1
> RX/TX processing lcore:  6 rx: 1 tx: 1
> RX/TX processing lcore:  7 rx: 1 tx: 1
> RX/TX processing lcore:  8 rx: 1 tx: 1
> RX/TX processing lcore:  9 rx: 1 tx: 1
> RX/TX processing lcore: 10 rx: 1 tx: 1
> RX/TX processing lcore: 11 rx: 1 tx: 1
> RX/TX processing lcore: 12 rx: 1 tx: 1
> RX/TX processing lcore: 13 rx: 1 tx: 1
>
> Note that the number of RX queues is correct.
>
> I use the following commands to start generating traffic. (The link
> partner is running DPDK 18.11.1 with the testpmd app configured for
> "io" forwarding.)
>
> set all size 64
> set all rate 100
> set all count 0
> set all burst 16
> range all src port 1 1 1023 1
> range all dst ip 10.0.0.0 10.0.0.0 10.0.255.255 0.0.0.1
> range all src ip 10.0.0.0 10.0.0.0 10.0.255.255 0.0.0.1
> range 0 src mac 00:00:00:00:00:00 00:00:00:00:00:00 00:12:34:56:78:90 00:00:00:01:01:01
> range 0 dst mac 00:20:00:00:00:00 00:20:00:00:00:00 00:98:76:54:32:10 00:00:00:01:01:01
> range all size 64 64 64 0
> enable all range
> start all
>
> Later, when I'm running traffic, the statistics show something different:
>
> Pktgen:/> page stats
> | Copyright (c) <2010-2019>, Intel Corporation
>
> Port 0      Pkts Rx/Tx            Rx Errors/Missed  Rate Rx/Tx         MAC Address
> 0-04:00.0   542522040/2806635488  0/0               10182516/51993536  EC:0D:9A:CA:B4:98
>
>          ipackets    opackets       ibytes        obytes  errors
> Q  0:   542522040   546551712  32551322400   32793102720       0
> Q  1:           0   451205888            0   27072353280       0
> Q  2:           0   457296176            0   27437770560       0
> Q  3:           0   455300832            0   27318049920       0
> Q  4:           0   442654816            0   26559288960       0
> Q  5:           0   453626064            0   27217563840       0
> Q  6:           0           0            0             0       0
> Q  7:           0           0            0             0       0
> Q  8:           0           0            0             0       0
> Q  9:           0           0            0             0       0
> Q 10:           0           0            0             0       0
> Q 11:           0           0            0             0       0
> Q 12:           0           0            0             0       0
> Q 13:           0           0            0             0       0
> Q 14:           0           0            0             0       0
> Q 15:           0           0            0             0       0
> -- Pktgen Ver: 3.6.6 (DPDK 18.11.1)  Powered by DPDK  (pid:15485)
>
> Traffic is only received on RX queue 0. Anyone run into this? The
> link partner shows traffic received and transmitted on all configured
> queues (16 in this case), so I don't think the link partner is
> dropping traffic in such a way that the remaining traffic flows to a
> single RX queue on the SUT.
>
> Dave

This normally means RSS is not distributing the RX traffic to the other
queues, which in turn means the received traffic is not varied enough
for RSS to spread it across queues. That would be my best guess; I have
not used the Mellanox cards.
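For reference, a DPDK 18.11 application normally asks for RSS at
configure time along these lines. This is only a sketch from memory,
not pktgen's actual code (the enable_rss() helper is made up for
illustration), but the point is that the rss_hf bits must cover the
header fields your range commands vary; if they don't, the hash never
changes and every packet lands on queue 0:

/* Sketch only, not pktgen's code: typical RSS setup in DPDK 18.11. */
#include <rte_ethdev.h>

static int
enable_rss(uint16_t port_id, uint16_t nb_rxq, uint16_t nb_txq)
{
	struct rte_eth_dev_info info;
	struct rte_eth_conf conf = { 0 };

	rte_eth_dev_info_get(port_id, &info);

	conf.rxmode.mq_mode = ETH_MQ_RX_RSS;
	conf.rx_adv_conf.rss_conf.rss_key = NULL;  /* driver default key */
	/* Hash on IP addresses and TCP/UDP ports, masked down to the
	 * hash types this PMD reports as supported, since recent
	 * releases reject unsupported rss_hf bits. */
	conf.rx_adv_conf.rss_conf.rss_hf =
		(ETH_RSS_IP | ETH_RSS_UDP | ETH_RSS_TCP) &
		info.flow_type_rss_offloads;

	return rte_eth_dev_configure(port_id, nb_rxq, nb_txq, &conf);
}

If the device is being configured like that, the next thing I would
check is whether the packets testpmd sends back still carry the varied
addresses and ports you set with the range commands.

Regards,
Keith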