From: "Wiles, Keith"
To: Shihabur Rahman Chowdhury
CC: "users@dpdk.org"
Date: Wed, 12 Apr 2017 22:41:47 +0000
Subject: Re: [dpdk-users] Low Rx throughput when using Mellanox ConnectX-3 card with DPDK

> On Apr 12, 2017, at 4:00 PM, Shihabur Rahman Chowdhury wrote:
>
> Hello,
>
> We are running a simple DPDK application and observing quite low
> throughput. We are currently testing the application with the following
> setup:
>
> - 2 machines, each with 2x Intel Xeon E5-2620 CPUs
> - Each machine with a Mellanox single-port 10G ConnectX-3 card
> - Mellanox DPDK version 16.11
> - Mellanox OFED 4.0-2.0.0.1 and the latest firmware for the ConnectX-3
>
> The application is doing almost nothing. It is reading a batch of 64
> packets from a single rxq, swapping the MAC of each packet and writing it
> back to a single txq. The rx and tx are being handled by separate lcores
> on the same NUMA socket. We are running pktgen on another machine. With
> 64B-sized packets we are seeing a ~14.8Mpps Tx rate and a ~7.3Mpps Rx
> rate in pktgen. We checked the NIC on the machine running the DPDK
> application (with ifconfig) and it looks like a large number of packets
> are being dropped by the interface. Our ConnectX-3 card should
> theoretically be able to handle 10Gbps Rx + 10Gbps Tx throughput (with
> channel width 4, the theoretical max on PCIe 3.0 should be ~31.2Gbps).
> Interestingly, when the Tx rate is reduced in pktgen (to ~9Mpps), the Rx
> rate increases to ~9Mpps.

Not sure what is going on here; when you drop the rate to 9Mpps I assume you
stop getting missed frames.

Do you have flow control enabled?

On the pktgen side, are you seeing missed RX packets?

Did you loop the cable back from the pktgen machine to the other port on the
pktgen machine, and did you get the same Rx/Tx performance in that
configuration?
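
If it helps, this is roughly how I would read the RX drop counters and
query/override link flow control from inside the app. This is only a minimal
sketch, assuming port 0 and the DPDK 16.11 API names; which of these counters
the mlx4 PMD actually fills in, and whether it accepts flow_ctrl_set, is
something you would need to verify on your setup.

    #include <stdio.h>
    #include <string.h>
    #include <inttypes.h>
    #include <rte_ethdev.h>

    /* Illustrative helper (not from your app): dump RX drop counters and
     * turn link flow control off on one port. */
    static void
    check_port(uint8_t port)
    {
        struct rte_eth_stats stats;
        struct rte_eth_fc_conf fc;

        rte_eth_stats_get(port, &stats);
        printf("rx=%" PRIu64 " imissed=%" PRIu64
               " ierrors=%" PRIu64 " rx_nombuf=%" PRIu64 "\n",
               stats.ipackets, stats.imissed,
               stats.ierrors, stats.rx_nombuf);

        memset(&fc, 0, sizeof(fc));
        if (rte_eth_dev_flow_ctrl_get(port, &fc) == 0)
            printf("flow control mode=%d autoneg=%d\n",
                   (int)fc.mode, (int)fc.autoneg);

        fc.mode = RTE_FC_NONE;  /* or RTE_FC_FULL to enable pause frames */
        if (rte_eth_dev_flow_ctrl_set(port, &fc) != 0)
            printf("flow_ctrl_set not supported by this PMD\n");
    }

A steadily growing imissed (or rx_nombuf) on your side would tell you whether
the drops happen because the rx core cannot keep up or further up the wire.
ethtool -a on the underlying mlx4 netdev should show the same pause settings
if you prefer to check from the shell.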
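
Also, just so we are talking about the same kind of loop: I am assuming the
fast path looks roughly like the sketch below (a simplified single-lcore
version, port 0 and queue 0 assumed, 16.11-era struct names; your split
across separate rx and tx lcores and whatever sits between them is left out).
If your real loop differs much from this, that would be useful to know.

    #include <rte_ethdev.h>
    #include <rte_ether.h>
    #include <rte_mbuf.h>

    #define BURST_SIZE 64

    /* One iteration of a macswap forward loop: rx burst, swap the Ethernet
     * source/destination addresses, tx burst, free what tx did not take. */
    static void
    macswap_once(uint8_t port)
    {
        struct rte_mbuf *bufs[BURST_SIZE];
        struct ether_addr tmp;
        uint16_t nb_rx, nb_tx, i;

        nb_rx = rte_eth_rx_burst(port, 0, bufs, BURST_SIZE);
        for (i = 0; i < nb_rx; i++) {
            struct ether_hdr *eth =
                rte_pktmbuf_mtod(bufs[i], struct ether_hdr *);

            ether_addr_copy(&eth->s_addr, &tmp);
            ether_addr_copy(&eth->d_addr, &eth->s_addr);
            ether_addr_copy(&tmp, &eth->d_addr);
        }

        nb_tx = rte_eth_tx_burst(port, 0, bufs, nb_rx);
        for (i = nb_tx; i < nb_rx; i++)
            rte_pktmbuf_free(bufs[i]);
    }

If rte_eth_tx_burst regularly returns less than nb_rx, those freed packets
are drops on your side as well, so it is worth counting them.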
>
> We would highly appreciate it if we could get some pointers as to what
> could possibly be causing this mismatch between Rx and Tx. Ideally, we
> should be able to see ~14Mpps Rx as well. Is it because we are using a
> single port? Or something else?
>
> FYI, we also ran the sample l2fwd application and test-pmd and got
> comparable results in the same setup.
>
> Thanks
> Shihab

Regards,
Keith