From: Hemant Agrawal <hemant.agrawal@nxp.com>
To: "Hunt, David" <david.hunt@intel.com>, Olivier Matz <olivier.matz@6wind.com>, "dev@dpdk.org"
CC: "jerin.jacob@caviumnetworks.com"
Subject: Re: [dpdk-dev] [RFC 0/7] changing mbuf pool handler
Date: Wed, 5 Oct 2016 11:49:15 +0000
References: <1474292567-21912-1-git-send-email-olivier.matz@6wind.com>

Hi Olivier,

> -----Original Message-----
> From: Hunt, David [mailto:david.hunt@intel.com]
>
> Hi Olivier,
>
> On 3/10/2016 4:49 PM, Olivier Matz wrote:
> > Hi Hemant,
> >
> > Thank you for your feedback.
> >
> > On 09/22/2016 01:52 PM, Hemant Agrawal wrote:
> >> Hi Olivier,
> >>
> >> On 9/19/2016 7:12 PM, Olivier Matz wrote:
> >>> Hello,
> >>>
> >>> Following the discussion from [1] ("usages issue with external
> >>> mempool"), this is an attempt to make the mempool_ops feature
> >>> introduced by David Hunt [2] more widely used by applications.
> >>>
> >>> It applies on top of a minor fix in the mbuf lib [3].
> >>>
> >>> To summarize the needs (please comment if I did not get them right):
> >>>
> >>> - new hw-assisted mempool handlers will soon be introduced
> >>> - to make use of them, the new mempool API [4]
> >>>   (rte_mempool_create_empty, rte_mempool_populate, ...) has to be used
> >>> - the legacy mempool API (rte_mempool_create) does not allow changing
> >>>   the mempool ops; the default is "ring_<s|m>p_<s|m>c" depending on
> >>>   the flags
> >>> - the mbuf helper (rte_pktmbuf_pool_create) does not allow changing
> >>>   them either, and the default is RTE_MBUF_DEFAULT_MEMPOOL_OPS
> >>>   ("ring_mp_mc")
> >>> - today, most (if not all) applications and examples use either
> >>>   rte_pktmbuf_pool_create or rte_mempool_create to create the mbuf
> >>>   pool, making it difficult to take advantage of this feature with
> >>>   existing apps
> >>>
> >>> My initial idea was to deprecate both rte_pktmbuf_pool_create() and
> >>> rte_mempool_create(), forcing applications to use the new API, which
> >>> is more flexible. But after digging a bit, it appeared that
> >>> rte_mempool_create() is widely used, and not only for mbufs.
> >>> Deprecating it would have a big impact on applications, and
> >>> replacing it with the new API would be overkill in many use-cases.
> >>
> >> I agree with the proposal.
> >>
> >>> So I finally tried the following approach (inspired by a suggestion
> >>> from Jerin [5]):
> >>>
> >>> - add a new mempool_ops parameter to rte_pktmbuf_pool_create(). This
> >>>   unfortunately breaks the API, but I implemented an ABI compat
> >>>   layer. If the patch is accepted, we could discuss how to
> >>>   announce/schedule the API change.
> >>> - update the applications and documentation to prefer
> >>>   rte_pktmbuf_pool_create() as much as possible
> >>> - update the most used examples (testpmd, l2fwd, l3fwd) to add a new
> >>>   command line argument to select the mempool handler
> >>>
> >>> I hope external applications would then switch to
> >>> rte_pktmbuf_pool_create(), since it supports most of the use-cases
> >>> (even priv_size != 0, since we can call rte_mempool_obj_iter()
> >>> afterwards).
> >>
> >> I would still prefer if you could add the "rte_mempool_obj_cb_t
> >> *obj_cb, void *obj_cb_arg" into "rte_pktmbuf_pool_create". This single
> >> consolidated wrapper will make it almost certain that applications
> >> will not try to use rte_mempool_create for packet buffers.
> >
> > The patch changes the example applications. I'm not sure I understand
> > why adding these arguments would force applications not to use
> > rte_mempool_create() for packet buffers. Do you have an application
> > in mind?
> >
> > For the mempool_ops parameter, we must pass it at init because we need
> > to know the mempool handler before populating the pool. Object
> > initialization can be done afterwards, so I thought it was better to
> > reduce the number of arguments, to avoid falling into the
> > mempool_create() syndrome :)
> >
> > Any other opinions?
> >
> > Regards,
> > Olivier
>
> I also agree with the proposal. Looks cleaner.
>
> I would lean to the side of keeping the parameters to a minimum, i.e.
> not adding *obj_cb and *obj_cb_arg into rte_pktmbuf_pool_create.
> Developers always have the option of going with rte_mempool_create if
> they need more fine-grained control.
>
> Regards,
> Dave.

[Hemant] The implementations with hw-offloaded mempools don't want
developers using *rte_mempool_create* for packet buffer pools: that API
simply does not work for hw-offloaded mempools.

Also, *rte_mempool_create_empty* may not be convenient for many
applications, as it requires calling 4+ APIs.
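As a rough sketch, here is what that sequence looks like for an mbuf
pool, wrapped in a hypothetical helper (modeled on what
rte_pktmbuf_pool_create() itself does internally today; error handling
mostly omitted for brevity):

    #include <rte_mbuf.h>
    #include <rte_mempool.h>

    static struct rte_mempool *
    mbuf_pool_create_by_hand(const char *name, unsigned int n,
                             unsigned int cache_size, uint16_t priv_size,
                             uint16_t data_room_size, int socket_id,
                             const char *ops_name)
    {
        struct rte_pktmbuf_pool_private mbp_priv;
        unsigned int elt_size;
        struct rte_mempool *mp;

        /* priv_size is assumed to be properly aligned here */
        elt_size = sizeof(struct rte_mbuf) + priv_size + data_room_size;

        /* 1. create an empty pool, no objects allocated yet */
        mp = rte_mempool_create_empty(name, n, elt_size, cache_size,
                        sizeof(struct rte_pktmbuf_pool_private),
                        socket_id, 0);
        if (mp == NULL)
            return NULL;

        /* 2. select the mempool handler, before the pool is populated */
        rte_mempool_set_ops_byname(mp, ops_name, NULL);

        /* 3. initialize the pool private area used by the mbuf library */
        mbp_priv.mbuf_data_room_size = data_room_size;
        mbp_priv.mbuf_priv_size = priv_size;
        rte_pktmbuf_pool_init(mp, &mbp_priv);

        /* 4. allocate and add the objects to the pool */
        rte_mempool_populate_default(mp);

        /* 5. run the mbuf constructor on each object */
        rte_mempool_obj_iter(mp, rte_pktmbuf_init, NULL);

        return mp;
    }

That is five calls (plus the error paths) for something most
applications just want in one line.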
Olivier is not in favor of deprecating *rte_mempool_create*, and I
agree with the concerns he raised.

Essentially, I was suggesting to upgrade *rte_pktmbuf_pool_create* to
be the equivalent of *rte_mempool_create* for packet buffers
exclusively. This would provide a clear segregation of API usage:
packet buffer pools vs. all other types of mempools.

Regards,
Hemant
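P.S. For concreteness, the consolidated wrapper I have in mind would
look something like the following. This is a hypothetical signature for
discussion only, not something from the RFC patches: the ops_name,
obj_cb and obj_cb_arg parameters are the additions, the rest matches
today's rte_pktmbuf_pool_create.

    /* Hypothetical signature, for discussion only: one call covering
     * packet buffer pools exclusively, including selection of the
     * mempool handler and per-object initialization. */
    struct rte_mempool *
    rte_pktmbuf_pool_create(const char *name, unsigned int n,
            unsigned int cache_size, uint16_t priv_size,
            uint16_t data_room_size, int socket_id,
            const char *ops_name,         /* e.g. "ring_mp_mc", or a
                                           * hw-offloaded handler */
            rte_mempool_obj_cb_t *obj_cb, /* optional, may be NULL */
            void *obj_cb_arg);

Applications that only ever deal in packet buffers would then have no
reason to reach for rte_mempool_create at all.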