From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path: <shreyansh.jain@nxp.com>
From: Shreyansh Jain <shreyansh.jain@nxp.com>
To: Olivier Matz <olivier.matz@6wind.com>
CC: "dev@dpdk.org" <dev@dpdk.org>, "jerin.jacob@caviumnetworks.com"
 <jerin.jacob@caviumnetworks.com>, Jan Viktorin <viktorin@rehivetech.com>,
 "Hunt, David" <david.hunt@intel.com>
Thread-Topic: [dpdk-dev] [PATCH v8 1/3] mempool: support external mempool
 operations
Thread-Index: AQHRwJ6WYtTJj5Q9pk+CUYpxLcIab5/fWYJQgAGK+ACAADqLAIABM3UAgABCuWA=
Date: Fri, 10 Jun 2016 11:37:34 +0000
Deferred-Delivery: Fri, 10 Jun 2016 11:36:51 +0000
Message-ID: <DB5PR0401MB20548FBDF87EC7F9761D4E0A90500@DB5PR0401MB2054.eurprd04.prod.outlook.com>
References: <1464874043-67467-1-git-send-email-david.hunt@intel.com>
 <1464965906-108927-1-git-send-email-david.hunt@intel.com>
 <1464965906-108927-2-git-send-email-david.hunt@intel.com>
 <DB5PR0401MB20545CC8494FB0965C338EE2905C0@DB5PR0401MB2054.eurprd04.prod.outlook.com>
 <5756931C.70805@intel.com>
 <DB5PR0401MB2054E6778695E327F48AE6E8905E0@DB5PR0401MB2054.eurprd04.prod.outlook.com>
 <57593962.6010802@intel.com>
 <20160609150918.502322c8@pcviktorin.fit.vutbr.cz>
 <575A6C68.3090007@6wind.com>
In-Reply-To: <575A6C68.3090007@6wind.com>
Accept-Language: en-US
Content-Language: en-US
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
Subject: Re: [dpdk-dev] [PATCH v8 1/3] mempool: support external mempool
 operations
X-BeenThere: dev@dpdk.org
X-Mailman-Version: 2.1.15
Precedence: list
List-Id: patches and discussions about DPDK <dev.dpdk.org>
List-Unsubscribe: <http://dpdk.org/ml/options/dev>,
 <mailto:dev-request@dpdk.org?subject=unsubscribe>
List-Archive: <http://dpdk.org/ml/archives/dev/>
List-Post: <mailto:dev@dpdk.org>
List-Help: <mailto:dev-request@dpdk.org?subject=help>
List-Subscribe: <http://dpdk.org/ml/listinfo/dev>,
 <mailto:dev-request@dpdk.org?subject=subscribe>
X-List-Received-Date: Fri, 10 Jun 2016 11:37:38 -0000

Hi Olivier,

> -----Original Message-----
> From: Olivier Matz [mailto:olivier.matz@6wind.com]
> Sent: Friday, June 10, 2016 1:00 PM
> To: Jan Viktorin <viktorin@rehivetech.com>; Hunt, David
> <david.hunt@intel.com>
> Cc: Shreyansh Jain <shreyansh.jain@nxp.com>; dev@dpdk.org;
> jerin.jacob@caviumnetworks.com
> Subject: Re: [dpdk-dev] [PATCH v8 1/3] mempool: support external mempool
> operations
>
> Hi,
>
> On 06/09/2016 03:09 PM, Jan Viktorin wrote:
> >>> My suggestion is to have an additional flag,
> >>> 'MEMPOOL_F_PKT_ALLOC', which, if specified, would:
> >>>
> >>> ...
> >>> #define MEMPOOL_F_SC_GET    0x0008
> >>> #define MEMPOOL_F_PKT_ALLOC 0x0010
> >>> ...
> >>>
> >>> in rte_mempool_create_empty: ... after checking the other
> >>> MEMPOOL_F_* flags...
> >>>
> >>> if (flags & MEMPOOL_F_PKT_ALLOC)
> >>>     rte_mempool_set_ops_byname(mp, RTE_MBUF_DEFAULT_MEMPOOL_OPS)
> >>>
> >>> And removing the redundant call to rte_mempool_set_ops_byname()
> >>> in rte_pktmbuf_create_pool().
> >>>
> >>> Thereafter, rte_pktmbuf_pool_create can be changed to:
> >>>
> >>> ...
> >>> mp = rte_mempool_create_empty(name, n, elt_size, cache_size,
> >>> -        sizeof(struct rte_pktmbuf_pool_private), socket_id, 0);
> >>> +        sizeof(struct rte_pktmbuf_pool_private), socket_id,
> >>> +        MEMPOOL_F_PKT_ALLOC);
> >>> if (mp == NULL)
> >>>     return NULL;
> >>
> >> Yes, this would simplify somewhat the creation of a pktmbuf pool,
> >> in that it replaces the rte_mempool_set_ops_byname with a flag bit.
> >> However, I'm not sure we want to introduce a third method of
> >> creating a mempool to the developers. If we introduced this, we
> >> would then have: 1. rte_pktmbuf_pool_create() 2.
> >> rte_mempool_create_empty() with MEMPOOL_F_PKT_ALLOC set (which
> >> would use the configured custom handler) 3.
> >> rte_mempool_create_empty() with MEMPOOL_F_PKT_ALLOC __not__ set
> >> followed by a call to rte_mempool_set_ops_byname() (would allow
> >> several different custom handlers to be used in one application
> >>
> >> Does anyone else have an opinion on this? Oliver, Jerin, Jan?
> >
> > I am quite careful about this topic as I don't feel to be very
> > involved in all the use cases. My opinion is that the _new API_
> > should be able to cover all cases and the _old API_ should be
> > backwards compatible, however, built on top of the _new API_.
> >
> > I.e. I think, the flags MEMPOOL_F_SP_PUT, MEMPOOL_F_SC_GET (relicts
> > of the old API) should be accepted by the old API ONLY. The
> > rte_mempool_create_empty should not process them.
>
> The rte_mempool_create_empty() function already processes these flags
> (SC_GET, SP_PUT) as of today.
>
> > Similarly for a potential MEMPOOL_F_PKT_ALLOC, I would not polute the
> > rte_mempool_create_empty by this anymore.
>
> +1
>
> I think we should stop adding flags. Flags are prefered for independent
> features. Here, what would be the behavior with MEMPOOL_F_PKT_ALLOC +
> MEMPOOL_F_SP_PUT?
>
> Another reason to not add this flag is the rte_mempool library
> should not be aware of mbufs. The mbuf pools rely on mempools, but
> not the contrary.

Agree - mempool should be agnostic of the mbufs using it.
But, in my opinion, mempool should be aware of the allocator it is using.

And I agree with your argument about "MEMPOOL_F_PKT_ALLOC + MEMPOOL_F_SP_PUT" - it is bad semantics.

>
>
> > In overall we would get exactly 2 approaches (and not more):
> >
> > * using rte_mempool_create with flags calling the
> > rte_mempool_create_empty and rte_mempool_set_ops_byname internally
> > (so this layer can be marked as deprecated and removed in the
> > future)
>
> Agree. This was one of the objective of my mempool rework patchset:
> provide a more flexible API, and avoid functions with 10 to 15
> arguments.
>
> > * using rte_mempool_create_empty + rte_mempool_set_ops_byname -
> > allowing any customizations but with the necessity to change the
> > applications (new preferred API)
>
> Yes.
> And if required, maybe a third API is possible in case of mbuf pools.
> Indeed, the applications are encouraged to use rte_pktmbuf_pool_create()
> to create a pool of mbuf instead of mempool API. If an application
> wants to select specific ops for it, we could add:
>
>   rte_pktmbuf_pool_create_<something>(..., name)
>=20
> instead of using the mempool API.
> I think this is what Shreyansh suggests when he says:
>
>   It sounds fine if calls to rte_mempool_* can select an external
>   handler *optionally* - but, if we pass it as command line, it would
>   be binding (at least, semantically) for rte_pktmbuf_* calls as well.
>   Isn't it?

Er, I think I should clarify the context.
I was referring to the 'command-line-argument-for-selecting-external-mem-allocator' comment. I was just highlighting that it would probably conflict with the two APIs.

But, having said that, I agree with you about "...applications are encouraged to use rte_pktmbuf_pool_create() to create a pool of mbuf...".

>
>
> > So, the old applications can stay as they are (OK, with a possible
> > new flag MEMPOOL_F_PKT_ALLOC) and the new one can do the same but you
> > have to set the ops explicitly.
> >
> > The more different ways of using those APIs we have, the greater hell
> > we have to maintain.
>
> I'm really not in favor of a MEMPOOL_F_PKT_ALLOC flag in mempool api.

Agree. Flags are not a pretty way of handling mutually exclusive features - they are not intuitive.
A semantically cleaner API is the better approach.

>
> I think David's patch is already a good step forward. Let's do it
> step by step. Next step is maybe to update some applications (at least
> testpmd) to select a new pool handler dynamically.

Fair enough. We can gradually update the applications to show the 'best practice' of creating buffer or packet pools.

>
> Regards,
> Olivier

-
Shreyansh