From mboxrd@z Thu Jan 1 00:00:00 1970
From: Shreyansh Jain <shreyansh.jain@nxp.com>
To: "Ananyev, Konstantin", "Ruifeng Wang (Arm Technology China)", "dev@dpdk.org"
CC: nd
Thread-Topic: [dpdk-dev] [PATCH] examples/l3fwd: support separate buffer pool per port
Date: Tue, 16 Apr 2019 16:00:44 +0000
Subject: Re: [dpdk-dev] [PATCH] examples/l3fwd: support separate buffer pool per port
List-Id: DPDK patches and discussions
Sender: "dev" <dev-bounces@dpdk.org>

Hi Ananyev,

[...]

> > As you have stated below, it's just the same thing with two different views.
> >
> > > I think it would be plausible for both cases:
> > > - one port per core (your case).
> > > - multiple ports per core.
> >
> > Indeed. For this particular patch, I just chose the first one, probably
> > because that is the most common use-case I come across. I am sure the
> > second has an equal number of possible use-cases - but someone with
> > access to that kind of scenario would be better suited to validate the
> > performance increase. Do you think it would be OK to have this in, and
> > then enable the second option sometime in the future?
>
> What I am trying to say is: if we have a mempool per lcore (not per
> port), then it would cover both cases above.
> So we wouldn't need to make extra changes.
> Konstantin

What you are suggesting would end up as a 1:N mapping of port:pool (when multiple queues are being used for a port, each affined to a different core). In my observation, or rather in the cases I generally see, that would end up reducing performance. Hardware pools in particular work best when pool and port are co-located.

At least for me, this option of setting multiple buffer pools against lcores in l3fwd is NOT a preferred use-case.
Which leads me to conclude that we would anyway need both mappings - pool-per-port and pool-per-core - to cover a larger number of use-cases (at least, yours and mine).

Regards,
Shreyansh