From: Shreyansh Jain <shreyansh.jain@nxp.com>
To: Jerin Jacob, "Hunt, David"
Cc: dev@dpdk.org, olivier.matz@6wind.com, viktorin@rehivetech.com
Date: Thu, 9 Jun 2016 11:49:44 +0000
Subject: Re: [dpdk-dev] [PATCH v8 1/3] mempool: support external mempool operations
Hi Jerin,

> -----Original Message-----
> From: Jerin Jacob [mailto:jerin.jacob@caviumnetworks.com]
> Sent: Thursday, June 09, 2016 4:02 PM
> To: Hunt, David
> Cc: Shreyansh Jain; dev@dpdk.org; olivier.matz@6wind.com; viktorin@rehivetech.com
> Subject: Re: [dpdk-dev] [PATCH v8 1/3] mempool: support external mempool operations
>
> On Thu, Jun 09, 2016 at 10:39:46AM +0100, Hunt, David wrote:
> > Hi Shreyansh,
> >
> > On 8/6/2016 2:48 PM, Shreyansh Jain wrote:
> > > Hi David,
> > >
> > > Thanks for the explanation. I have some comments inline...
> > >
> > > > -----Original Message-----
> > > > From: Hunt, David [mailto:david.hunt@intel.com]
> > > > Sent: Tuesday, June 07, 2016 2:56 PM
> > > > To: Shreyansh Jain; dev@dpdk.org
> > > > Cc: olivier.matz@6wind.com; viktorin@rehivetech.com; jerin.jacob@caviumnetworks.com
> > > > Subject: Re: [dpdk-dev] [PATCH v8 1/3] mempool: support external mempool operations
> > > >
> > > > Hi Shreyansh,
> > > >
> > > > On 6/6/2016 3:38 PM, Shreyansh Jain wrote:
> > > > > Hi,
> > > > >
> > > > > (Apologies for the overly-eager email sent on this thread earlier.
> > > > > Will be more careful in future.)
> > > > > This is more of a question/clarification than a comment.
> > > > > (And I have taken only some snippets from the original mail to
> > > > > keep it cleaner.)
> > > > >
> > > > > > +MEMPOOL_REGISTER_OPS(ops_mp_mc);
> > > > > > +MEMPOOL_REGISTER_OPS(ops_sp_sc);
> > > > > > +MEMPOOL_REGISTER_OPS(ops_mp_sc);
> > > > > > +MEMPOOL_REGISTER_OPS(ops_sp_mc);
> > > > >
> > > > > From the above, what I understand is that multiple packet pool
> > > > > handlers can be created.
> > > > > I have a use-case where the application has multiple pools but
> > > > > only the packet pool is hardware-backed. Using the hardware for
> > > > > general buffer requirements would prove costly.
> > > > > From what I understand from the patch, selection of the pool is
> > > > > based on the flags below.
> > > >
> > > > The flags are only used to select one of the default handlers, for
> > > > backward compatibility, through the rte_mempool_create call. If you
> > > > wish to use a mempool handler that is not one of the defaults (i.e.
> > > > a new hardware handler), you would use rte_mempool_create_empty
> > > > followed by the rte_mempool_set_ops_byname call.
> > > > So, for the external handlers, you create an empty mempool, then
> > > > set the operations (ops) for that particular mempool.
> > > I am concerned about the existing applications (for example, l3fwd).
> > > The explicit 'rte_mempool_create_empty -> rte_mempool_set_ops_byname'
> > > model would require modifications to these applications.
> > > Ideally, without any modifications, these applications should be able
> > > to use packet pools (backed by hardware) and buffer pools (backed by
> > > ring/others) transparently.
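As an aside, the create-empty-then-set-ops flow described above can be modelled with a small self-contained sketch. The types and names below are illustrative stand-ins, not the actual DPDK implementation:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Simplified model of the ops-registration mechanism: handlers are
 * registered by name in a table, and a pool created "empty" is then
 * bound to one of them by name. */
struct mempool_ops {
	char name[32];
	/* alloc/enqueue/dequeue function pointers would live here */
};

#define MAX_OPS 16
static struct mempool_ops ops_table[MAX_OPS];
static int num_ops;

struct mempool {
	const struct mempool_ops *ops; /* handler bound to this pool */
};

/* Model of MEMPOOL_REGISTER_OPS(): add a handler to the table. */
static int register_ops(const char *name)
{
	if (num_ops >= MAX_OPS)
		return -1;
	strncpy(ops_table[num_ops].name, name,
		sizeof(ops_table[num_ops].name) - 1);
	return num_ops++;
}

/* Model of rte_mempool_set_ops_byname(): bind a pool to a named
 * handler, after the pool has been created empty. */
static int set_ops_byname(struct mempool *mp, const char *name)
{
	int i;

	for (i = 0; i < num_ops; i++) {
		if (strcmp(ops_table[i].name, name) == 0) {
			mp->ops = &ops_table[i];
			return 0;
		}
	}
	return -1; /* handler not registered */
}
```

A hardware-backed handler would be registered under its own name at startup, and only the pools that should use it get bound to it; other pools keep a ring-based handler.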
> > >
> > > If I go by your suggestions, what I understand is that doing the
> > > above without modification to applications would be equivalent to:
> > >
> > > struct rte_mempool_ops custom_hw_allocator = {...}
> > >
> > > and thereafter, in config/common_base:
> > >
> > > CONFIG_RTE_DEFAULT_MEMPOOL_OPS="custom_hw_allocator"
> > >
> > > so that calls to rte_pktmbuf_pool_create would use the new allocator.
> >
> > Yes, correct. But only for calls to rte_pktmbuf_pool_create(). Calls
> > to rte_mempool_create will continue to use the default handlers (ring
> > based).
> > > But another problem arises here.
> > >
> > > There are two distinct paths for allocation of a memory pool:
> > > 1. A 'pkt' pool:
> > >    rte_pktmbuf_pool_create
> > >      \- rte_mempool_create_empty
> > >      |    \- rte_mempool_set_ops_byname(..ring_mp_mc..)
> > >      |
> > >      `- rte_mempool_set_ops_byname
> > >             (...RTE_MBUF_DEFAULT_MEMPOOL_OPS..)
> > >         /* Override default 'ring_mp_mc' of
> > >          * rte_mempool_create */
> > >
> > > 2. Through the generic mempool create API:
> > >    rte_mempool_create
> > >      \- rte_mempool_create_empty
> > >           (passing pktmbuf and pool constructors)
> > >
> > > I found various instances in example applications where
> > > rte_mempool_create() is being called directly for packet pools,
> > > bypassing the more semantically correct call to rte_pktmbuf_* for
> > > packet pools.
> > >
> > > In control path (2), RTE_MBUF_DEFAULT_MEMPOOL_OPS wouldn't be able
> > > to replace custom handler operations for packet buffer allocations.
> > >
> > > From a performance point of view, applications should be able to
> > > select between packet pools and non-packet pools.
> >
> > This is intended for backward compatibility and API consistency. Any
> > applications that use rte_mempool_create directly will continue to use
> > the default mempool handlers. If they need to use a custom handler,
> > they will need to be modified to call the newer API,
> > rte_mempool_create_empty and rte_mempool_set_ops_byname.
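To make the divergence between the two paths concrete, here is a self-contained model. Names are illustrative: "custom_hw_allocator" stands in for a configured RTE_MBUF_DEFAULT_MEMPOOL_OPS, and the first flag test is written as the intended "both single-producer put and single-consumer get" check:

```c
#include <assert.h>
#include <string.h>

/* Flag values as in the patch. */
#define MEMPOOL_F_SP_PUT 0x0004
#define MEMPOOL_F_SC_GET 0x0008

/* Illustrative stand-in for a configured RTE_MBUF_DEFAULT_MEMPOOL_OPS. */
#define MBUF_DEFAULT_OPS "custom_hw_allocator"

/* Path 2: handler chosen by rte_mempool_create() from the SP/SC flags
 * (model of the hunk quoted in this thread); the mbuf default is never
 * consulted on this path. */
static const char *generic_create_ops(unsigned int flags)
{
	if ((flags & MEMPOOL_F_SP_PUT) && (flags & MEMPOOL_F_SC_GET))
		return "ring_sp_sc";
	else if (flags & MEMPOOL_F_SP_PUT)
		return "ring_sp_mc";
	else if (flags & MEMPOOL_F_SC_GET)
		return "ring_mp_sc";
	return "ring_mp_mc";
}

/* Path 1: handler a pool ends up with via rte_pktmbuf_pool_create():
 * the initial ring handler is overridden with the configured default. */
static const char *pktmbuf_create_ops(void)
{
	return MBUF_DEFAULT_OPS;
}
```

So a packet pool created through the generic path stays on a ring handler even when a hardware-backed default is configured, which is the gap pointed out above.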
> >
> > >
> > > > > > +	/*
> > > > > > +	 * Since we have 4 combinations of SP/SC/MP/MC, examine the
> > > > > > +	 * flags to set the correct index into the table of ops structs.
> > > > > > +	 */
> > > > > > +	if (flags & (MEMPOOL_F_SP_PUT | MEMPOOL_F_SC_GET))
> > > > > > +		rte_mempool_set_ops_byname(mp, "ring_sp_sc");
> > > > > > +	else if (flags & MEMPOOL_F_SP_PUT)
> > > > > > +		rte_mempool_set_ops_byname(mp, "ring_sp_mc");
> > > > > > +	else if (flags & MEMPOOL_F_SC_GET)
> > > > > > +		rte_mempool_set_ops_byname(mp, "ring_mp_sc");
> > > > > > +	else
> > > > > > +		rte_mempool_set_ops_byname(mp, "ring_mp_mc");
> > > > > > +
> > > My suggestion is to have an additional flag, 'MEMPOOL_F_PKT_ALLOC',
> > > which, if specified, would:
> > >
> > > ...
> > > #define MEMPOOL_F_SC_GET    0x0008
> > > #define MEMPOOL_F_PKT_ALLOC 0x0010
> > > ...
> > >
> > > in rte_mempool_create_empty:
> > > ... after checking the other MEMPOOL_F_* flags ...
> > >
> > > if (flags & MEMPOOL_F_PKT_ALLOC)
> > >     rte_mempool_set_ops_byname(mp, RTE_MBUF_DEFAULT_MEMPOOL_OPS)
> > >
> > > and remove the now-redundant call to rte_mempool_set_ops_byname() in
> > > rte_pktmbuf_pool_create().
> > >
> > > Thereafter, rte_pktmbuf_pool_create can be changed to:
> > >
> > >   ...
> > >   mp = rte_mempool_create_empty(name, n, elt_size, cache_size,
> > > -      sizeof(struct rte_pktmbuf_pool_private), socket_id, 0);
> > > +      sizeof(struct rte_pktmbuf_pool_private), socket_id,
> > > +      MEMPOOL_F_PKT_ALLOC);
> > >   if (mp == NULL)
> > >       return NULL;
> >
> > Yes, this would simplify somewhat the creation of a pktmbuf pool, in
> > that it replaces the rte_mempool_set_ops_byname with a flag bit.
> > However, I'm not sure we want to introduce a third method of creating
> > a mempool to the developers. If we introduced this, we would then have:
> > 1. rte_pktmbuf_pool_create()
> > 2. rte_mempool_create_empty() with MEMPOOL_F_PKT_ALLOC set (which
> >    would use the configured custom handler)
> > 3. rte_mempool_create_empty() with MEMPOOL_F_PKT_ALLOC __not__ set,
> >    followed by a call to rte_mempool_set_ops_byname() (which would
> >    allow several different custom handlers to be used in one
> >    application)
> >
> > Does anyone else have an opinion on this? Olivier, Jerin, Jan?
>
> As I mentioned earlier, my take is not to create separate APIs for
> external mempool handlers. In my view, it's the same thing, just a
> separate mempool handler selected through function pointers.
>
> To keep backward compatibility, I think we can extend the flags in
> rte_mempool_create and have a single API for external/internal pool
> creation (this also makes it easy for existing applications: just add a
> mempool-handler command line argument to let them choose the handler).

Maybe I am interpreting it wrong, but are you suggesting a single mempool
handler for all the buffer/packet needs of an application (passed as a
command line argument)?

That would be inefficient, especially for cases where a pool is backed by
hardware. The application wouldn't want its generic buffers to consume
hardware resources which would be better used for packets.

I was hoping that the external mempool handler would help segregate such
use-cases and allow applications to tap into accelerated packet
processors.

Do correct me if my understanding is wrong.

>
> Jerin
>
> >
> > Regards,
> > Dave.

- Shreyansh
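For reference, the flag-based proposal from this thread can be sketched as a self-contained model. MEMPOOL_F_PKT_ALLOC is the hypothetical flag proposed above, not an existing DPDK flag, and "custom_hw_allocator" stands in for a configured default:

```c
#include <assert.h>
#include <string.h>

#define MEMPOOL_F_SP_PUT    0x0004
#define MEMPOOL_F_SC_GET    0x0008
#define MEMPOOL_F_PKT_ALLOC 0x0010 /* hypothetical, as proposed above */

/* Illustrative stand-in for RTE_MBUF_DEFAULT_MEMPOOL_OPS. */
#define MBUF_DEFAULT_OPS "custom_hw_allocator"

/* Model of handler selection in rte_mempool_create_empty() with the
 * proposed flag: packet pools go to the configured (possibly
 * hardware-backed) handler, everything else stays on a ring handler. */
static const char *select_ops(unsigned int flags)
{
	if (flags & MEMPOOL_F_PKT_ALLOC)
		return MBUF_DEFAULT_OPS;
	if ((flags & MEMPOOL_F_SP_PUT) && (flags & MEMPOOL_F_SC_GET))
		return "ring_sp_sc";
	if (flags & MEMPOOL_F_SP_PUT)
		return "ring_sp_mc";
	if (flags & MEMPOOL_F_SC_GET)
		return "ring_mp_sc";
	return "ring_mp_mc";
}
```

Under such a scheme, a packet pool created with MEMPOOL_F_PKT_ALLOC lands on the accelerated handler while generic buffer pools stay on ring-based handlers, which is the segregation argued for above.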