From: "Ananyev, Konstantin"
To: Shreyansh Jain, "Ruifeng Wang (Arm Technology China)", dev@dpdk.org
CC: nd
Subject: Re: [dpdk-dev] [PATCH] examples/l3fwd: support separate buffer pool per port
Date: Wed, 17 Apr 2019 11:21:59 +0000
Message-ID: <2601191342CEEE43887BDE71AB9772580148A98E25@irsmsx105.ger.corp.intel.com>

Hi,

> > > As you have stated below, it's just the same thing with two different
> > > views.
> > >
> > > > I think it would be plausible for both cases:
> > > > - one port per core (your case).
> > > > - multiple ports per core.
> > >
> > > Indeed. For this particular patch I just chose the first one, probably
> > > because that is the most common use-case I come across. I am sure the
> > > second has an equal number of possible use-cases too, but someone with
> > > access to that kind of scenario would probably be better suited to
> > > validate the performance increase.
> > > Do you think it would be OK to have this in, and then enable the
> > > second option sometime in the future?
> >
> > What I am trying to say is: if we have a mempool per lcore (not per
> > port), then it would cover both cases above, so we wouldn't need to
> > make extra changes.
> > Konstantin
>
> What you are suggesting would end up as a 1:N mapping of port:pool (when
> multiple queues are being used for a port, each affined to a different
> core).

Yes. Probably there is some misunderstanding on my part.
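For concreteness, a mempool-per-lcore setup along the lines suggested above
could look roughly like the sketch below. The pool names, mbuf count and
cache size are assumptions made for illustration; they are not taken from
the patch.

/*
 * Rough sketch of a pool-per-lcore mapping (illustrative values only).
 * Each lcore gets its own pktmbuf pool, allocated on its NUMA socket,
 * regardless of how many ports/queues that lcore ends up polling.
 */
#include <stdio.h>
#include <rte_config.h>
#include <rte_lcore.h>
#include <rte_mbuf.h>
#include <rte_mempool.h>

#define NB_MBUF    8192   /* assumed number of mbufs per pool */
#define MBUF_CACHE  256   /* assumed per-lcore mempool cache size */

static struct rte_mempool *lcore_pool[RTE_MAX_LCORE];

static int
setup_per_lcore_pools(void)
{
	unsigned int lcore_id;
	char name[RTE_MEMPOOL_NAMESIZE];

	RTE_LCORE_FOREACH(lcore_id) {
		snprintf(name, sizeof(name), "mbuf_pool_l%u", lcore_id);
		lcore_pool[lcore_id] = rte_pktmbuf_pool_create(name,
				NB_MBUF, MBUF_CACHE, 0,
				RTE_MBUF_DEFAULT_BUF_SIZE,
				rte_lcore_to_socket_id(lcore_id));
		if (lcore_pool[lcore_id] == NULL) {
			fprintf(stderr, "cannot create pool for lcore %u\n",
					lcore_id);
			return -1;
		}
	}
	return 0;
}

With this mapping an lcore polling queues on several ports still uses a
single pool, and an lcore handling exactly one port behaves the same as a
per-port pool, which is why it would cover both cases above.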
From your previous mail:
"This is applicable for cases where separate cores can handle separate
ports - each with their pools. (somehow I felt that message in commit was
adequate - I can rephrase if that is misleading)"
I drew the conclusion (probably wrong) that the only config you are
interested in (the one that shows a performance improvement) is the one
where all queues of each port are managed by the same core and each core
manages only one port.
From that perspective it doesn't matter whether we have a pool per core or
per port - we would still end up with a separate pool per port (and core).
But probably that conclusion was wrong.

> In my observation, or rather in the cases I generally see, that would end
> up reducing performance. Hardware pools especially work best when pool
> and port are co-located.

For generic (SW based) pools, having one pool per core should definitely be
faster than having multiple ones.
For HW based pools I can't say much, as I don't have such HW to try.

> At least for me, this option of setting multiple buffer pools against
> lcores in l3fwd is NOT a preferred use-case. Which leads me to conclude
> that we would anyway need both mappings - pool-per-port and
> pool-per-core - to cover a larger number of use-cases (at least yours
> and mine).

If my conclusion above was wrong, then yes.
Konstantin
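For reference, the pool-per-port mapping discussed above could look roughly
as follows; this is a sketch with assumed names and sizes, not code from the
patch. The per-port pool is the one handed to rte_eth_rx_queue_setup() for
each of that port's queues.

/*
 * Rough sketch of a pool-per-port mapping (illustrative values only).
 * One pktmbuf pool is created per port, on the port's NUMA socket, and
 * every RX queue of that port draws its mbufs from that pool.
 * Assumes rte_eth_dev_configure() was already called for the port.
 */
#include <stdio.h>
#include <stdint.h>
#include <rte_config.h>
#include <rte_ethdev.h>
#include <rte_mbuf.h>
#include <rte_mempool.h>

#define NB_MBUF    8192   /* assumed number of mbufs per pool */
#define MBUF_CACHE  256   /* assumed mempool cache size */
#define NB_RXD      512   /* assumed RX ring size */

static struct rte_mempool *port_pool[RTE_MAX_ETHPORTS];

static int
setup_port_pool_and_rxq(uint16_t portid, uint16_t queueid)
{
	char name[RTE_MEMPOOL_NAMESIZE];
	int socket = rte_eth_dev_socket_id(portid);

	if (socket < 0)	/* SOCKET_ID_ANY: fall back to socket 0 */
		socket = 0;

	if (port_pool[portid] == NULL) {
		snprintf(name, sizeof(name), "mbuf_pool_p%u", portid);
		port_pool[portid] = rte_pktmbuf_pool_create(name,
				NB_MBUF, MBUF_CACHE, 0,
				RTE_MBUF_DEFAULT_BUF_SIZE, socket);
		if (port_pool[portid] == NULL) {
			fprintf(stderr, "cannot create pool for port %u\n",
					portid);
			return -1;
		}
	}
	/* All RX queues of this port share the same per-port pool. */
	return rte_eth_rx_queue_setup(portid, queueid, NB_RXD,
			(unsigned int)socket, NULL, port_pool[portid]);
}

When all queues of a port are handled by one core, and that core handles
only that port, the two mappings end up equivalent - the "separate pool per
port (and core)" point made above.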