From: Shreyansh Jain <shreyansh.jain@nxp.com>
To: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Cc: "Hunt, David" <david.hunt@intel.com>, dev@dpdk.org, olivier.matz@6wind.com, viktorin@rehivetech.com
Date: Thu, 9 Jun 2016 13:03:40 +0000
Subject: Re: [dpdk-dev] [PATCH v8 1/3] mempool: support external mempool operations

Hi Jerin,

> -----Original Message-----
> From: Jerin Jacob [mailto:jerin.jacob@caviumnetworks.com]
> Sent: Thursday, June 09, 2016 6:01 PM
> To: Shreyansh Jain
> Cc: Hunt, David; dev@dpdk.org; olivier.matz@6wind.com;
> viktorin@rehivetech.com
> Subject: Re: [dpdk-dev] [PATCH v8 1/3] mempool: support external mempool
> operations
>
> On Thu, Jun 09, 2016 at 11:49:44AM +0000, Shreyansh Jain wrote:
> > Hi Jerin,
>
> Hi Shreyansh,
>
> > > > Yes, this would simplify somewhat the creation of a pktmbuf pool, in
> > > > that it replaces the rte_mempool_set_ops_byname with a flag bit.
> > > > However, I'm not sure we want to introduce a third method of creating
> > > > a mempool to the developers. If we introduced this, we would then
> > > > have:
> > > > 1. rte_pktmbuf_pool_create()
> > > > 2. rte_mempool_create_empty() with MEMPOOL_F_PKT_ALLOC set (which
> > > >    would use the configured custom handler)
> > > > 3. rte_mempool_create_empty() with MEMPOOL_F_PKT_ALLOC __not__ set,
> > > >    followed by a call to rte_mempool_set_ops_byname() (would allow
> > > >    several different custom handlers to be used in one application)
> > > >
> > > > Does anyone else have an opinion on this? Olivier, Jerin, Jan?
> > >
> > > As I mentioned earlier, my take is not to create separate APIs for
> > > external mempool handlers. In my view, it is the same thing, just a
> > > separate mempool handler reached through function pointers.
> > >
> > > To keep backward compatibility, I think we can extend the flags in
> > > rte_mempool_create and have a single API for external/internal pool
> > > creation (this makes it easy for existing applications too: just add a
> > > mempool flag command line argument to choose the mempool handler).
> >
> > Maybe I am interpreting it wrong, but are you suggesting a single
> > mempool handler for all buffer/packet needs of an application (passed
> > as a command line argument)?
> >
> > That would be inefficient, especially for cases where the pool is
> > backed by hardware. The application wouldn't want its generic buffers
> > to consume hardware resources that would be better used for packets.
>
> It may vary from platform to platform or with the particular use case.
> For instance, a HW external pool manager for generic buffers may scale
> better than a SW multi-producer/multi-consumer implementation when the
> number of cores > N, as no locking is involved in enqueue/dequeue
> (again, this depends on the specific HW implementation).

I agree with you that such cases would exist.

But even in these cases, I think it should be the application's prerogative
to decide whether its buffers are managed by a hardware allocator or by the
SW [SM]p/[SM]c implementations. Probably, in this case the application
would call the rte_mempool_*(PKT_POOL) for generic buffers as well (or
maybe a dedicated buffer pool flag) - just as an example.
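Just to make the per-pool choice concrete, here is a rough sketch against
the v8 API as I read it ("example_hw_pool" is an imaginary handler name
that a platform driver would have registered; error handling trimmed):

    #include <rte_lcore.h>
    #include <rte_mempool.h>

    static void
    setup_pools(void)
    {
        /* Packet pool: hand these buffers to the HW allocator. */
        struct rte_mempool *pkt_mp = rte_mempool_create_empty(
                "pkt_pool", 8192, 2048, 256, 0, rte_socket_id(), 0);
        if (pkt_mp != NULL) {
            /* Ops must be set before the pool is populated. */
            rte_mempool_set_ops_byname(pkt_mp, "example_hw_pool", NULL);
            rte_mempool_populate_default(pkt_mp);
        }

        /* Generic buffer pool: keep the default SW ring handler. */
        struct rte_mempool *buf_mp = rte_mempool_create_empty(
                "buf_pool", 1024, 512, 0, 0, rte_socket_id(), 0);
        if (buf_mp != NULL) {
            rte_mempool_set_ops_byname(buf_mp, "ring_mp_mc", NULL);
            rte_mempool_populate_default(buf_mp);
        }
    }

Both pools coexist in one application, each with its own handler - which is
what (3) in David's list above allows.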
> I thought there is no harm in selecting the external pool handler at the
> root level itself (rte_mempool_create), as by default it is SW MP/MC and
> it is just an option to override if the application wants it.

It sounds fine if calls to rte_mempool_* can select an external handler
*optionally* - but if we pass it on the command line, it would be binding
(at least semantically) for rte_pktmbuf_* calls as well. Wouldn't it?

[Probably I am still unclear on how it would remain 'optional' in the
command line case you suggested.]

> Jerin

[...]

-
Shreyansh
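P.S. To show where my concern about the command line route comes from,
here is a purely hypothetical sketch - neither the "--mempool-ops" option
nor the global below exists in the patchset; both are my invention for
illustration:

    #include <rte_lcore.h>
    #include <rte_mempool.h>

    /* Hypothetical: EAL parsed "--mempool-ops=example_hw_pool" at
     * startup and stashed the handler name here. */
    extern const char *global_mempool_ops_name;

    static struct rte_mempool *
    create_any_pool(const char *name, unsigned n, unsigned elt_size)
    {
        struct rte_mempool *mp = rte_mempool_create_empty(name, n,
                elt_size, 0, 0, rte_socket_id(), 0);
        if (mp == NULL)
            return NULL;
        /*
         * One global default now applies to generic buffer pools and
         * packet pools alike; the per-pool choice is lost unless every
         * creation path grows an explicit override.
         */
        rte_mempool_set_ops_byname(mp, global_mempool_ops_name, NULL);
        rte_mempool_populate_default(mp);
        return mp;
    }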