From mboxrd@z Thu Jan 1 00:00:00 1970
From: "Ananyev, Konstantin"
To: Shreyansh Jain, "Ruifeng Wang (Arm Technology China)", dev@dpdk.org
Cc: nd
Date: Mon, 15 Apr 2019 12:05:20 +0000
Subject: Re: [dpdk-dev] [PATCH] examples/l3fwd: support separate buffer pool per port
Message-ID: <2601191342CEEE43887BDE71AB9772580148A97E0F@irsmsx105.ger.corp.intel.com>
References: <20190103112932.4415-1-shreyansh.jain@nxp.com>
 <2601191342CEEE43887BDE71AB9772580148A942C6@irsmsx105.ger.corp.intel.com>
List-Id: DPDK patches and discussions

Hi Shreyansh,

> > > I tried this patch on MacchiatoBin + 82599 NIC.
> > > Compared with global-pool mode, per-port-pool mode showed slightly
> > > lower performance in the single-core test.
> >
> > That was my thought too - for the case when queues from multiple ports
> > are handled by the same core, it probably would only slow things down.
>
> Thanks for your comments.
>
> This is applicable for cases where separate cores can handle separate ports - each with their own pools.
> (somehow I felt that message in the commit was adequate - I can rephrase if that is misleading)
>
> In case there are enough cores available for the datapath, such segregation can result in better
> performance - possibly because of a drop in pool and cache conflicts.
> At least on some NXP SoCs, this resulted in over 15% improvement.
> And in other cases it didn't lead to any drop/negative impact.

If each core manages just one port, then yes, a performance increase is definitely expected.
If that's the case you'd like to enable, then can I suggest having a mempool per lcore rather than per port?
I think it would be suitable for both cases:
- one port per core (your case).
- multiple ports per core.
(a rough sketch of what I mean is at the end of this mail)

Konstantin

> > Wonder what is the use case for the patch and what is the performance
> > gain you observed?
>
> For hardware-backed pools, hardware access and exclusion are expensive. By segregating pool/port/lcore
> it is possible to attain a conflict-free path. This is the use case this patch targets.
> And anyway, this is an optional feature.
>
> > Konstantin
> >
> > > In the dual-core test, both modes had nearly the same performance.
>
> OK
>
> > > My setup only has two ports, which is limiting.
> > > Just want to know whether the per-port-pool mode has more performance gain
> > > when many ports are bound to different cores?
>
> Yes, though not necessarily *many* - in my case, I had 4 ports and even then a ~10% improvement was
> directly visible. I increased the port count and I was able to touch about ~15%. I did pin each port
> to a separate core, though.
> But again, the important point is that without this feature enabled, I didn't see any drop in
> performance. Did you observe any drop?
>
> > > Used commands:
> > > sudo ./examples/l3fwd/build/l3fwd -c 0x4 -w 0000:01:00.0 -w 0000:01:00.1 -- -P -p 3 --config='(0,0,2),(1,0,2)' --per-port-pool
> > > sudo ./examples/l3fwd/build/l3fwd -c 0xc -w 0000:01:00.0 -w 0000:01:00.1 -- -P -p 3 --config='(0,0,2),(1,0,3)' --per-port-pool
> > >
> > > Regards,
> > > /Ruifeng
>
> [...]
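
PS: to make the mempool-per-lcore idea concrete, below is a rough, untested sketch (not code from the
patch under discussion - the pool size, cache size and naming are made up for illustration). The idea
is that l3fwd would create one pool per enabled lcore, on that lcore's NUMA node, and then pass
lcore_pktmbuf_pool[lcore_id] to rte_eth_rx_queue_setup() for every queue that lcore polls:

#include <stdio.h>
#include <rte_lcore.h>
#include <rte_mempool.h>
#include <rte_mbuf.h>

#define NB_MBUF            8192   /* illustrative pool size */
#define MEMPOOL_CACHE_SIZE 256    /* illustrative per-lcore cache size */

static struct rte_mempool *lcore_pktmbuf_pool[RTE_MAX_LCORE];

static int
init_per_lcore_pools(void)
{
	unsigned int lcore_id;
	char name[RTE_MEMPOOL_NAMESIZE];

	for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++) {
		if (!rte_lcore_is_enabled(lcore_id))
			continue;

		snprintf(name, sizeof(name), "mbuf_pool_lc%u", lcore_id);

		/* allocate the pool on the NUMA node the lcore belongs to */
		lcore_pktmbuf_pool[lcore_id] = rte_pktmbuf_pool_create(name,
			NB_MBUF, MEMPOOL_CACHE_SIZE, 0,
			RTE_MBUF_DEFAULT_BUF_SIZE,
			rte_lcore_to_socket_id(lcore_id));
		if (lcore_pktmbuf_pool[lcore_id] == NULL)
			return -1;
	}
	return 0;
}

Compared with a per-port pool, this bounds the number of pools by the number of worker cores and keeps
each pool (and its cache) private to the core that uses it, so it should cover the one-port-per-core
case as well as the many-ports-per-core one.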