Subject: [dpdk-dev] Pktgen Only Enables One RX Queue with Mellanox ConnectX-5
From: David Christensen
To: dev@dpdk.org
Date: Wed, 5 Jun 2019 15:08:57 -0700
Message-ID: <671a1e2e-cf45-e86f-01ef-3c05530ab067@linux.vnet.ibm.com>

Using a Mellanox ConnectX-5 @ 100Gb on a Supermicro system with DPDK 18.11.1 and pktgen 3.6.6 from the master branch (downloaded today).
When I configure pktgen to use multiple queues, I'm able to set the number of TX queues correctly, but the number of RX queues doesn't look correct. pktgen indicates the correct number of queues during startup, but the stats and xstats pages always show traffic going to a single RX queue. For example, here's how pktgen starts:

$ sudo -E LD_LIBRARY_PATH=/home/davec/src/dpdk/x86_64-native-linuxapp-gcc/lib \
    /home/davec/src/pktgen/app/x86_64-native-linuxapp-gcc/pktgen \
    -l 1,2-13 -w 04:00.0 -w 04:00.1 -n 3 -- -P -m "[2-7:2-7].0, [8-13:8-13].1"

Copyright (c) <2010-2019>, Intel Corporation. All rights reserved. Powered by DPDK
EAL: Detected 56 lcore(s)
EAL: Detected 2 NUMA nodes
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Probing VFIO support...
EAL: PCI device 0000:04:00.0 on NUMA socket 0
EAL:   probe driver: 15b3:1019 net_mlx5
EAL: PCI device 0000:04:00.1 on NUMA socket 0
EAL:   probe driver: 15b3:1019 net_mlx5
Lua 5.3.5  Copyright (C) 1994-2018 Lua.org, PUC-Rio

*** Copyright (c) <2010-2019>, Intel Corporation. All rights reserved.
*** Pktgen created by: Keith Wiles -- >>> Powered by DPDK <<<

Initialize Port 0 -- TxQ 6, RxQ 6, Src MAC ec:0d:9a:ca:b4:98
Initialize Port 1 -- TxQ 6, RxQ 6, Src MAC ec:0d:9a:ca:b4:99

Port 0: Link Up - speed 100000 Mbps - full-duplex
Port 1: Link Up - speed 100000 Mbps - full-duplex

RX/TX processing lcore:   2 rx:  1 tx:  1
RX/TX processing lcore:   3 rx:  1 tx:  1
RX/TX processing lcore:   4 rx:  1 tx:  1
RX/TX processing lcore:   5 rx:  1 tx:  1
RX/TX processing lcore:   6 rx:  1 tx:  1
RX/TX processing lcore:   7 rx:  1 tx:  1
RX/TX processing lcore:   8 rx:  1 tx:  1
RX/TX processing lcore:   9 rx:  1 tx:  1
RX/TX processing lcore:  10 rx:  1 tx:  1
RX/TX processing lcore:  11 rx:  1 tx:  1
RX/TX processing lcore:  12 rx:  1 tx:  1
RX/TX processing lcore:  13 rx:  1 tx:  1

Note that the number of RX queues is correct. I use the following commands to start generating traffic. (The link partner is running DPDK 18.11.1 with the testpmd app configured for "io" forwarding.)

set all size 64
set all rate 100
set all count 0
set all burst 16
range all src port 1 1 1023 1
range all dst ip 10.0.0.0 10.0.0.0 10.0.255.255 0.0.0.1
range all src ip 10.0.0.0 10.0.0.0 10.0.255.255 0.0.0.1
range 0 src mac 00:00:00:00:00:00 00:00:00:00:00:00 00:12:34:56:78:90 00:00:00:01:01:01
range 0 dst mac 00:20:00:00:00:00 00:20:00:00:00:00 00:98:76:54:32:10 00:00:00:01:01:01
range all size 64 64 64 0
enable all range
start all

Later, when I'm running traffic, the statistics show something different:

Pktgen:/> page stats
| Copyright (c) <2010-2019>, Intel Corporation
Port 0       Pkts Rx/Tx            Rx Errors/Missed   Rate Rx/Tx          MAC Address
0-04:00.0    542522040/2806635488  0/0                10182516/51993536   EC:0D:9A:CA:B4:98

        ipackets     opackets     ibytes         obytes         errors
Q  0:   542522040    546551712    32551322400    32793102720    0
Q  1:   0            451205888    0              27072353280    0
Q  2:   0            457296176    0              27437770560    0
Q  3:   0            455300832    0              27318049920    0
Q  4:   0            442654816    0              26559288960    0
Q  5:   0            453626064    0              27217563840    0
Q  6:   0            0            0              0              0
Q  7:   0            0            0              0              0
Q  8:   0            0            0              0              0
Q  9:   0            0            0              0              0
Q 10:   0            0            0              0              0
Q 11:   0            0            0              0              0
Q 12:   0            0            0              0              0
Q 13:   0            0            0              0              0
Q 14:   0            0            0              0              0
Q 15:   0            0            0              0              0

-- Pktgen Ver: 3.6.6 (DPDK 18.11.1)  Powered by DPDK  (pid:15485)

Traffic is only received on RX queue 0. Has anyone run into this? The link partner shows traffic received and transmitted on all of its configured queues (16 in this case), so I don't think the link partner is dropping traffic in such a way that the remaining traffic flows to a single RX queue on the SUT.

Dave
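P.S. For reference, here's a rough sketch of the kind of multi-queue RX setup I'd expect to be happening under the hood, using the generic DPDK 18.11 ethdev API. This is not pktgen's actual code: the queue counts, descriptor counts, and mempool are placeholders. But if the port ends up configured without something equivalent (in particular mq_mode = ETH_MQ_RX_RSS and a non-zero rss_hf), I'd expect the PMD to deliver every flow to RX queue 0, which is exactly what the stats above show.

#include <rte_ethdev.h>

/*
 * Hypothetical minimal setup, not pktgen's code: NB_RXQ/NB_TXQ, the
 * descriptor counts, and the mempool argument are placeholders.
 */
#define NB_RXQ 6
#define NB_TXQ 6

static int
setup_port(uint16_t port_id, struct rte_mempool *mbuf_pool)
{
        struct rte_eth_conf conf = {
                .rxmode = {
                        .mq_mode = ETH_MQ_RX_RSS, /* hash flows across RX queues */
                },
                .rx_adv_conf.rss_conf = {
                        .rss_key = NULL,          /* use the PMD's default key */
                        .rss_hf  = ETH_RSS_IP | ETH_RSS_UDP | ETH_RSS_TCP,
                },
        };
        uint16_t q;
        int ret;

        ret = rte_eth_dev_configure(port_id, NB_RXQ, NB_TXQ, &conf);
        if (ret < 0)
                return ret;

        /* One RX and one TX queue per mapped lcore. */
        for (q = 0; q < NB_RXQ; q++) {
                ret = rte_eth_rx_queue_setup(port_id, q, 512,
                                rte_eth_dev_socket_id(port_id), NULL, mbuf_pool);
                if (ret < 0)
                        return ret;
        }
        for (q = 0; q < NB_TXQ; q++) {
                ret = rte_eth_tx_queue_setup(port_id, q, 512,
                                rte_eth_dev_socket_id(port_id), NULL);
                if (ret < 0)
                        return ret;
        }
        return rte_eth_dev_start(port_id);
}

If the port is instead left with mq_mode at its default of ETH_MQ_RX_NONE, all received traffic lands on queue 0 no matter how many RX queues are set up, which would match the behavior I'm seeing.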