From: "Ananyev, Konstantin"
To: Shreyansh Jain, "Ruifeng Wang (Arm Technology China)", dev@dpdk.org
Cc: nd
Subject: Re: [dpdk-dev] [PATCH] examples/l3fwd: support separate buffer pool per port
Date: Tue, 16 Apr 2019 12:54:48 +0000
Message-ID: <2601191342CEEE43887BDE71AB9772580148A98609@irsmsx105.ger.corp.intel.com>

> -----Original Message-----
> From: Shreyansh Jain [mailto:shreyansh.jain@nxp.com]
> Sent: Tuesday, April 16, 2019 1:48 PM
> To: Ananyev, Konstantin; Ruifeng Wang (Arm Technology China); dev@dpdk.org
> Cc: nd
> Subject: RE: [dpdk-dev] [PATCH] examples/l3fwd: support separate buffer pool per port
>
> Hello Ananyev,
>
> > Hi Shreyansh,
> >
> > > > > I tried this patch on MacchiatoBin + 82599 NIC.
> > > > > Compared with global-pool mode, per-port-pool mode showed slightly
> > > > > lower performance in the single-core test.
> > > >
> > > > That was my thought too - for the case when queues from multiple
> > > > ports are handled by the same core, it would probably only slow
> > > > things down.
> > >
> > > Thanks for your comments.
> > >
> > > This is applicable for cases where separate cores can handle separate
> > > ports - each with its own pool. (Somehow I felt the message in the
> > > commit was adequate - I can rephrase it if it is misleading.)
> > >
> > > In case there are enough cores available for the datapath, such
> > > segregation can result in better performance - possibly because of a
> > > drop in pool and cache conflicts.
> > > At least on some NXP SoCs, this resulted in over 15% improvement.
> > > And in other cases it didn't lead to any drop or negative impact.
> >
> > If each core manages just one port, then yes, a performance increase
> > is definitely expected.
> > If that's the case you'd like to enable, then can I suggest having a
> > mempool per lcore rather than per port?
>
> As you have stated below, it's just the same thing seen from two
> different views.
>
> > I think it would be plausible for both cases:
> > - one port per core (your case).
> > - multiple ports per core.
>
> Indeed. For this particular patch, I just chose the first one, probably
> because that is the use-case I come across most often.
> I am sure the second has just as many possible use-cases - but someone
> with access to that kind of scenario would probably be better suited to
> validate the performance increase.
> Do you think it would be OK to have this in, and then enable the second
> option sometime in the future?

What I am trying to say is: if we have a mempool per lcore (not per port),
then it would cover both cases above, so no extra changes would be needed
later.
Konstantin
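
For illustration, a minimal sketch of what a per-lcore pool setup could
look like in l3fwd is below. It is only a sketch under assumptions: the
helper name init_per_lcore_pools, the per-lcore cache size of 256 and the
per-pool mbuf count are placeholders, not taken from the patch under
discussion. The idea is that each RX queue is later set up (via
rte_eth_rx_queue_setup()) with the pool of the lcore that polls it.

/*
 * Sketch: one mbuf pool per enabled lcore, allocated on the socket of
 * that lcore. Names and sizes are illustrative only.
 */
#include <stdio.h>

#include <rte_errno.h>
#include <rte_lcore.h>
#include <rte_mbuf.h>
#include <rte_mempool.h>

static struct rte_mempool *pktmbuf_pool_per_lcore[RTE_MAX_LCORE];

static int
init_per_lcore_pools(unsigned int nb_mbuf_per_pool)
{
        unsigned int lcore_id;
        char name[RTE_MEMPOOL_NAMESIZE];

        RTE_LCORE_FOREACH(lcore_id) {
                snprintf(name, sizeof(name), "mbuf_pool_lc%u", lcore_id);
                /* Allocate the pool on the socket of the lcore using it. */
                pktmbuf_pool_per_lcore[lcore_id] = rte_pktmbuf_pool_create(
                                name, nb_mbuf_per_pool,
                                256 /* per-lcore cache size */, 0,
                                RTE_MBUF_DEFAULT_BUF_SIZE,
                                rte_lcore_to_socket_id(lcore_id));
                if (pktmbuf_pool_per_lcore[lcore_id] == NULL)
                        return -rte_errno;
        }
        return 0;
}

With such a layout, a core that serves a single port behaves exactly like
the per-port-pool case, while a core that serves several ports still draws
all its mbufs from one pool and one mempool cache.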