From: "Wiles, Keith"
To: Saber Rezvani
CC: Stephen Hemminger, dev@dpdk.org
Date: Tue, 28 Aug 2018 19:09:15 +0000
Subject: Re: [dpdk-dev] IXGBE throughput loss with 4+ cores

Which version of Pktgen?
I just pushed a patch in 3.5.3 to fix a performance problem.

> On Aug 28, 2018, at 12:05 PM, Saber Rezvani wrote:
>
> On 08/28/2018 08:31 PM, Stephen Hemminger wrote:
>> On Tue, 28 Aug 2018 17:34:27 +0430, Saber Rezvani wrote:
>>>
>>> Hi,
>>>
>>> I have run the multi_process/symmetric_mp example in the DPDK examples
>>> directory. With one process its throughput is line rate, but as I
>>> increase the number of cores I see a decrease in throughput. For
>>> example, if the number of queues is set to 4 and each queue is assigned
>>> to a single core, the throughput is about 9.4 Gb/s; with 8 queues, the
>>> throughput drops to 8.5 Gb/s.
>>>
>>> I have read the following, but it was not convincing:
>>>
>>> http://mails.dpdk.org/archives/dev/2015-October/024960.html
>>>
>>> I am eagerly looking forward to hearing from you all.
>>>
>>> Best wishes,
>>>
>>> Saber
>>>
>> Not completely surprising. If you have more cores than the packet line
>> rate requires, then the number of packets returned by each call to
>> rx_burst will be smaller. With a large number of cores, most of the time
>> will be spent doing reads of PCI registers for no packets!
>
> Indeed, pktgen says it is generating traffic at line rate, but I am
> receiving less than 10 Gb/s. So in that case there should be something
> that causes the reduction in throughput :(

Regards,
Keith
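
Stephen's point about rx_burst returning fewer packets per call can be made concrete with a rough back-of-the-envelope model (not from the thread itself; the per-core poll rate below is a hypothetical round number, and it assumes RSS spreads the load evenly across queues):

```python
# Rough model: average packets available per rx_burst call, per core.
# 10 GbE at 64-byte frames is ~14.88 Mpps total line rate.
LINE_RATE_PPS = 14_880_952

def pkts_per_poll(n_cores, polls_per_sec=10_000_000):
    """Average packets waiting per poll on one core.

    polls_per_sec is a hypothetical per-core polling rate, chosen only
    to illustrate the trend; assumes RSS splits traffic evenly.
    """
    per_core_pps = LINE_RATE_PPS / n_cores
    return per_core_pps / polls_per_sec

for cores in (1, 4, 8):
    print(f"{cores} core(s): {pkts_per_poll(cores):.2f} pkts/poll")
```

Under these assumptions a single core sees roughly 1.5 packets per poll, while at 8 cores the average falls below 0.2 — most rx_burst calls return nothing, yet each one still pays the cost of touching the NIC's queue registers, which matches the behavior Stephen describes.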