From: "Wiles, Keith"
To: Shihabur Rahman Chowdhury
CC: "users@dpdk.org"
Date: Thu, 13 Apr 2017 13:49:09 +0000
Subject: Re: [dpdk-users] Low Rx throughput when using Mellanox ConnectX-3 card with DPDK

> On Apr 12, 2017, at 7:06 PM, Shihabur Rahman Chowdhury wrote:
>
> We've disabled the pause frames. That also disables flow control, I assume. Correct me if I am wrong.
>
> On the pktgen side, the dropped and overrun fields for Rx keep increasing in ifconfig. Btw, the overrun and dropped fields always have exactly the same value.

Are you using the Linux kernel pktgen or the DPDK Pktgen?

http://dpdk.org/browse/apps/pktgen-dpdk/

> Our ConnectX-3 NIC has just a single 10G port, so there is no way to create such a loopback connection.
>
> Thanks
>
> Shihabur Rahman Chowdhury
> David R. Cheriton School of Computer Science
> University of Waterloo
>
> On Wed, Apr 12, 2017 at 6:41 PM, Wiles, Keith wrote:
>
> > On Apr 12, 2017, at 4:00 PM, Shihabur Rahman Chowdhury wrote:
> >
> > Hello,
> >
> > We are running a simple DPDK application and observing quite low throughput. We are currently testing a DPDK application with the following setup:
> >
> > - 2 machines with 2x Intel Xeon E5-2620 CPUs
> > - Each machine with a Mellanox single-port 10G ConnectX-3 card
> > - Mellanox DPDK version 16.11
> > - Mellanox OFED 4.0-2.0.0.1 and the latest firmware for ConnectX-3
> >
> > The application is doing almost nothing: it reads a batch of 64 packets from a single rxq, swaps the MAC addresses of each packet, and writes it back to a single txq. The rx and tx are handled by separate lcores on the same NUMA socket. We are running pktgen on another machine. With 64B packets we are seeing a ~14.8Mpps Tx rate but only a ~7.3Mpps Rx rate in pktgen.
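> >
> > For reference, the datapath is conceptually just the following (a minimal sketch against the DPDK 16.11 API, not our exact code: port/mempool initialization and the handoff between the rx and tx lcores are omitted, and swap_mac(), fwd_loop(), and BURST_SIZE are illustrative names):
> >
> > #include <rte_ethdev.h>
> > #include <rte_ether.h>
> > #include <rte_mbuf.h>
> >
> > #define BURST_SIZE 64
> >
> > /* Swap the source and destination MAC addresses in place.
> >  * (Illustrative stand-in for the actual MAC-swapping code.) */
> > static inline void
> > swap_mac(struct rte_mbuf *m)
> > {
> >     struct ether_hdr *eth = rte_pktmbuf_mtod(m, struct ether_hdr *);
> >     struct ether_addr tmp = eth->s_addr;
> >
> >     eth->s_addr = eth->d_addr;
> >     eth->d_addr = tmp;
> > }
> >
> > /* Collapsed into one loop for brevity; the real application splits
> >  * rx and tx across two lcores on the same NUMA socket. */
> > static void
> > fwd_loop(uint8_t port)
> > {
> >     struct rte_mbuf *bufs[BURST_SIZE];
> >     uint16_t i, nb_rx, nb_tx;
> >
> >     for (;;) {
> >         nb_rx = rte_eth_rx_burst(port, 0, bufs, BURST_SIZE);
> >
> >         for (i = 0; i < nb_rx; i++)
> >             swap_mac(bufs[i]);
> >
> >         nb_tx = rte_eth_tx_burst(port, 0, bufs, nb_rx);
> >
> >         /* Free any packets the tx queue did not accept. */
> >         for (i = nb_tx; i < nb_rx; i++)
> >             rte_pktmbuf_free(bufs[i]);
> >     }
> > }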
> > We checked the NIC on the machine running the DPDK application (with ifconfig) and it looks like a large number of packets are being dropped by the interface. Our ConnectX-3 card should theoretically be able to handle 10Gbps Rx + 10Gbps Tx simultaneously; with a x4 link, the theoretical max on PCIe 3.0 should be ~31.2Gbps (8GT/s per lane x 4 lanes with 128b/130b encoding is ~31.5Gbps raw, slightly less after protocol overhead). Interestingly, when the Tx rate is reduced in pktgen (to ~9Mpps), the Rx rate increases to ~9Mpps.

> Not sure what is going on here; when you drop the rate to 9Mpps I assume you stop getting missed frames.
> Do you have flow control enabled?
>
> On the pktgen side are you seeing missed RX packets?
> Did you loopback the cable from the pktgen machine to the other port on the pktgen machine, and did you get the same Rx/Tx performance in that configuration?
>
> > We would highly appreciate some pointers as to what could possibly be causing this mismatch between Rx and Tx. Ideally, we should be able to see ~14Mpps Rx as well. Is it because we are using a single port? Or something else?
> >
> > FYI, we also ran the sample l2fwd application and test-pmd and got comparable results in the same setup.
> >
> > Thanks
> > Shihab
>
> Regards,
> Keith

Regards,
Keith