From: Shreyansh Jain <shreyansh.jain@nxp.com>
To: "Hunt, David" <david.hunt@intel.com>, dev@dpdk.org
Cc: olivier.matz@6wind.com, viktorin@rehivetech.com, jerin.jacob@caviumnetworks.com
Date: Thu, 9 Jun 2016 11:41:45 +0000
Subject: Re: [dpdk-dev] [PATCH v8 1/3] mempool: support external mempool operations

Hi David,

> -----Original Message-----
> From: Hunt, David [mailto:david.hunt@intel.com]
> Sent: Thursday, June 09, 2016 3:10 PM
> To: Shreyansh Jain <shreyansh.jain@nxp.com>; dev@dpdk.org
> Cc: olivier.matz@6wind.com; viktorin@rehivetech.com;
> jerin.jacob@caviumnetworks.com
> Subject: Re: [dpdk-dev] [PATCH v8 1/3] mempool: support external mempool
> operations
>
> Hi Shreyansh,
>
> On 8/6/2016 2:48 PM, Shreyansh Jain wrote:
> > Hi David,
> >
> > Thanks for the explanation. I have some comments inline...
> >
> >> -----Original Message-----
> >> From: Hunt, David [mailto:david.hunt@intel.com]
> >> Sent: Tuesday, June 07, 2016 2:56 PM
> >> To: Shreyansh Jain <shreyansh.jain@nxp.com>; dev@dpdk.org
> >> Cc: olivier.matz@6wind.com; viktorin@rehivetech.com;
> >> jerin.jacob@caviumnetworks.com
> >> Subject: Re: [dpdk-dev] [PATCH v8 1/3] mempool: support external
> >> mempool operations
> >>
> >> Hi Shreyansh,
> >>
> >> On 6/6/2016 3:38 PM, Shreyansh Jain wrote:
> >>> Hi,
> >>>
> >>> (Apologies for the overly-eager email sent on this thread earlier.
> >>> Will be more careful in future.)
> >>> This is more of a question/clarification than a comment. (And I have
> >>> taken only some snippets from the original mail to keep it cleaner.)
> >>>
> >>>> +MEMPOOL_REGISTER_OPS(ops_mp_mc);
> >>>> +MEMPOOL_REGISTER_OPS(ops_sp_sc);
> >>>> +MEMPOOL_REGISTER_OPS(ops_mp_sc);
> >>>> +MEMPOOL_REGISTER_OPS(ops_sp_mc);
> >>>
> >>> From the above, what I understand is that multiple packet pool
> >>> handlers can be created.
> >>> I have a use case where the application has multiple pools, but only
> >>> the packet pool is hardware-backed. Using the hardware for general
> >>> buffer requirements would prove costly.
> >>> From what I understand from the patch, selection of the pool is based
> >>> on the flags below.
> >>
> >> The flags are only used to select one of the default handlers, for
> >> backward compatibility, through the rte_mempool_create call.
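(As I understand it, that legacy path looks roughly like the sketch below -
only an illustration, with a made-up pool name and sizes, of how an
unmodified application's flags end up selecting one of the four default
ring handlers via the table quoted further down:)

    struct rte_mempool *mp;

    /* legacy path: the ops are chosen implicitly from the flags; this
     * combination resolves to the "ring_sp_sc" handler */
    mp = rte_mempool_create("sp_sc_pool", 8192, 2048, 256, 0,
                            NULL, NULL, NULL, NULL, rte_socket_id(),
                            MEMPOOL_F_SP_PUT | MEMPOOL_F_SC_GET);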
> >> If you wish to use a mempool handler that is not one of the defaults
> >> (i.e. a new hardware handler), you would use rte_mempool_create_empty
> >> followed by the rte_mempool_set_ops_byname call.
> >> So, for the external handlers, you create an empty mempool, then set
> >> the operations (ops) for that particular mempool.
> > I am concerned about the existing applications (for example, l3fwd).
> > Explicit calls to the 'rte_mempool_create_empty ->
> > rte_mempool_set_ops_byname' model would require modifications to these
> > applications.
> > Ideally, without any modifications, these applications should be able
> > to use packet pools (backed by hardware) and buffer pools (backed by
> > ring/others) - transparently.
> >
> > If I go by your suggestion, what I understand is that doing the above
> > without modification to applications would be equivalent to:
> >
> >   struct rte_mempool_ops custom_hw_allocator = {...}
> >
> > thereafter, in config/common_base:
> >
> >   CONFIG_RTE_DEFAULT_MEMPOOL_OPS="custom_hw_allocator"
> >
> > after which calls to rte_pktmbuf_pool_create would use the new
> > allocator.
>
> Yes, correct. But only for calls to rte_pktmbuf_pool_create(). Calls to
> rte_mempool_create will continue to use the default handlers (ring
> based).

Agree with you.
But some applications continue to use rte_mempool_create for allocating
packet pools. Thus, even with a custom handler available (which, most
probably, would be a hardware packet buffer handler), such an application
would unintentionally end up not using it.
Probably, such applications should be changed? (e.g. pipeline.)

> > But another problem arises here.
> >
> > There are two distinct paths for allocation of a memory pool:
> > 1. A 'pkt' pool:
> >       rte_pktmbuf_pool_create
> >         \- rte_mempool_create_empty
> >         |     \- rte_mempool_set_ops_byname(..ring_mp_mc..)
> >         |
> >         `- rte_mempool_set_ops_byname
> >               (...RTE_MBUF_DEFAULT_MEMPOOL_OPS..)
> >               /* Override default 'ring_mp_mc' of
> >                * rte_mempool_create */
> >
> > 2. Through the generic mempool create API:
> >       rte_mempool_create
> >         \- rte_mempool_create_empty
> >              (passing pktmbuf and pool constructors)
> >
> > I found various instances in example applications where
> > rte_mempool_create() is being called directly for packet pools,
> > bypassing the more semantically correct call to rte_pktmbuf_* for
> > packet pools.
> >
> > In control path (2), RTE_MBUF_DEFAULT_MEMPOOL_OPS wouldn't be able to
> > replace custom handler operations for packet buffer allocations.
> >
> > From a performance point of view, applications should be able to
> > select between packet pools and non-packet pools.
>
> This is intended for backward compatibility and API consistency. Any
> applications that use rte_mempool_create directly will continue to use
> the default mempool handlers. If they need to use a custom handler, they
> will need to be modified to call the newer API,
> rte_mempool_create_empty and rte_mempool_set_ops_byname.

My understanding was that applications should be oblivious of how their
pools are managed, except that they do understand packet pools should be
faster (or accelerated) compared to non-packet pools.
(Of course, some applications may be designed to explicitly take advantage
of an available handler through rte_mempool_create_empty ->
rte_mempool_set_ops_byname calls.)
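(For illustration, a minimal sketch of that explicit flow, reusing the
hypothetical "custom_hw_allocator" handler named above; the pool name and
sizes are made up, and error handling is elided:)

    struct rte_mempool *mp;

    /* create the pool shell first, without any objects in it */
    mp = rte_mempool_create_empty("hw_pkt_pool", 8192, 2048, 256,
                                  sizeof(struct rte_pktmbuf_pool_private),
                                  rte_socket_id(), 0);
    /* then bind the externally registered handler to it by name */
    rte_mempool_set_ops_byname(mp, "custom_hw_allocator");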
In that perspective, I was expecting that applications should be calling:
 -> rte_pktmbuf_* for all packet-related operations
 -> rte_mempool_* for non-packet pools or explicit hardware handlers
and leave the rest of the mempool-handler-related magic to the DPDK
framework.

>
> >>>
> >>>> +	/*
> >>>> +	 * Since we have 4 combinations of SP/SC/MP/MC, examine the
> >>>> +	 * flags to set the correct index into the table of ops structs.
> >>>> +	 */
> >>>> +	if (flags & (MEMPOOL_F_SP_PUT | MEMPOOL_F_SC_GET))
> >>>> +		rte_mempool_set_ops_byname(mp, "ring_sp_sc");
> >>>> +	else if (flags & MEMPOOL_F_SP_PUT)
> >>>> +		rte_mempool_set_ops_byname(mp, "ring_sp_mc");
> >>>> +	else if (flags & MEMPOOL_F_SC_GET)
> >>>> +		rte_mempool_set_ops_byname(mp, "ring_mp_sc");
> >>>> +	else
> >>>> +		rte_mempool_set_ops_byname(mp, "ring_mp_mc");
> >>>> +
> > My suggestion is to have an additional flag, 'MEMPOOL_F_PKT_ALLOC',
> > which, if specified, would:

I read through some previous discussions and realized that something
similar [1] had already been proposed earlier. I didn't want to hijack
this thread with an old discussion - it was unintentional.

[1] http://article.gmane.org/gmane.comp.networking.dpdk.devel/39803

But [1] would make the distinction between the *type* of a pool and its
corresponding handler, whether default or external/custom, quite clear.

> >
> > ...
> > #define MEMPOOL_F_SC_GET	0x0008
> > #define MEMPOOL_F_PKT_ALLOC	0x0010
> > ...
> >
> > in rte_mempool_create_empty:
> > ... after checking the other MEMPOOL_F_* flags ...
> >
> >	if (flags & MEMPOOL_F_PKT_ALLOC)
> >		rte_mempool_set_ops_byname(mp, RTE_MBUF_DEFAULT_MEMPOOL_OPS);
> >
> > and removing the now-redundant call to rte_mempool_set_ops_byname() in
> > rte_pktmbuf_pool_create().
> >
> > Thereafter, rte_pktmbuf_pool_create can be changed to:
> >
> >	...
> >	mp = rte_mempool_create_empty(name, n, elt_size, cache_size,
> > -		sizeof(struct rte_pktmbuf_pool_private), socket_id, 0);
> > +		sizeof(struct rte_pktmbuf_pool_private), socket_id,
> > +		MEMPOOL_F_PKT_ALLOC);
> >	if (mp == NULL)
> >		return NULL;
>
> Yes, this would somewhat simplify the creation of a pktmbuf pool, in
> that it replaces the rte_mempool_set_ops_byname call with a flag bit.
> However, I'm not sure we want to introduce a third method of creating a
> mempool to developers. If we introduced this, we would then have:
> 1. rte_pktmbuf_pool_create()
> 2. rte_mempool_create_empty() with MEMPOOL_F_PKT_ALLOC set (which would
>    use the configured custom handler)
> 3. rte_mempool_create_empty() with MEMPOOL_F_PKT_ALLOC __not__ set,
>    followed by a call to rte_mempool_set_ops_byname() (which would allow
>    several different custom handlers to be used in one application)
>
> Does anyone else have an opinion on this? Olivier, Jerin, Jan?
>
> Regards,
> Dave.

- Shreyansh