From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
Received: from mail.droids-corp.org (zoll.droids-corp.org [94.23.50.67])
	by dpdk.org (Postfix) with ESMTP id F0FC97CC2
	for ; Mon, 25 Sep 2017 12:24:49 +0200 (CEST)
Received: from lfbn-lil-1-182-75.w90-45.abo.wanadoo.fr
	([90.45.31.75] helo=droids-corp.org)
	by mail.droids-corp.org with esmtpsa (TLS1.0:RSA_AES_256_CBC_SHA1:256)
	(Exim 4.84_2) (envelope-from ) id 1dwQek-0003zT-Ag;
	Mon, 25 Sep 2017 12:30:31 +0200
Received: by droids-corp.org (sSMTP sendmail emulation);
	Mon, 25 Sep 2017 12:24:40 +0200
Date: Mon, 25 Sep 2017 12:24:40 +0200
From: Olivier MATZ
To: Hemant Agrawal
Cc: santosh.shukla@caviumnetworks.com, dev@dpdk.org,
	jerin.jacob@caviumnetworks.com
Message-ID: <20170925102439.jpk4qynni6hmonif@platinum>
References: <1499170968-23016-1-git-send-email-hemant.agrawal@nxp.com>
	<08774e28-430e-4680-587a-909a5f44c5d6@nxp.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <08774e28-430e-4680-587a-909a5f44c5d6@nxp.com>
User-Agent: NeoMutt/20170113 (1.7.2)
Subject: Re: [dpdk-dev] [PATCH 0/2] Multiple Pktmbuf mempool support
X-BeenThere: dev@dpdk.org
X-Mailman-Version: 2.1.15
Precedence: list
List-Id: DPDK patches and discussions
List-Unsubscribe: ,
List-Archive:
List-Post:
List-Help:
List-Subscribe: ,
X-List-Received-Date: Mon, 25 Sep 2017 10:24:50 -0000

Hi Hemant,

On Fri, Sep 22, 2017 at 12:43:36PM +0530, Hemant Agrawal wrote:
> On 7/4/2017 5:52 PM, Hemant Agrawal wrote:
> > This patch is in addition to the patch series [1] submitted by
> > Santosh to allow the application to set the mempool handle.
> >
> > The existing pktmbuf pool create API only supports the internal use
> > of "CONFIG_RTE_MBUF_DEFAULT_MEMPOOL_OPS", which assumes that the HW
> > can support only one type of mempool for packet mbufs.
> >
> > There are multiple additional requirements:
> >
> > 1.
The platform-independent image detects the underlying bus and,
> > based on the bus and resources detected, dynamically selects
> > the default mempool. This should not require application knowledge.
> > E.g. DPAA2 and DPAA are two different NXP platforms; depending on the
> > underlying platform, the default ops for mbuf can be dpaa or dpaa2.
> > The application should work seamlessly whether it is running on dpaa
> > or dpaa2.
> >
> > 2. The platform supports more than one type of mempool for pktmbuf;
> > depending on the availability of resources, the driver can choose one
> > of the mempools for the current packet mbuf request.
> >
> > 3. In case the application provides the mempool, as proposed
> > in [1], the preference-check logic will be bypassed and the
> > application config will take priority.
> >
> > [1] Allow application to set mempool handle
> > http://dpdk.org/ml/archives/dev/2017-June/067022.html
> >
> > Hemant Agrawal (2):
> >   mempool: check the support for the given mempool
> >   mbuf: add support for preferred mempool list
> >
> >  config/common_base                   |  2 ++
> >  lib/librte_mbuf/rte_mbuf.c           | 28 +++++++++++++++++++++++-----
> >  lib/librte_mempool/rte_mempool.h     | 24 ++++++++++++++++++++++++
> >  lib/librte_mempool/rte_mempool_ops.c | 32 ++++++++++++++++++++++++++++
> >  4 files changed, 81 insertions(+), 5 deletions(-)
> >
> Hi Olivier,
> Any opinion on this patchset?

Sorry for the lack of feedback; for some reason I missed the initial
mails.

I don't quite like the idea of having a hardcoded config:

  CONFIG_RTE_MBUF_BACKUP_MEMPOOL_OPS_1=""
  CONFIG_RTE_MBUF_BACKUP_MEMPOOL_OPS_2=""

Also, I have reservations about rte_mempool_ops_check_support(): it can
return "supported", yet the creation of the pool can still fail later
due to the creation parameters (element count/size, mempool flags, ...).
The preferred ordering of these mempool ops may also depend on the
configuration (enabled ports, for instance) or on user choices.
Let me propose another approach to (I hope) solve your issue, based on
Santosh's patches [1]. We can introduce a new helper that applications
could use to dynamically select the best mempool ops. It could be
something similar to the pseudo-code I wrote in [3]:

  /* return an array of pool ops names, ordered by preference */
  pool_ops = get_ordered_pool_ops_list()

Then try to create the first pool; if it fails, try the next, until one
succeeds.

That said, it is difficult to make the right decision inside
rte_pktmbuf_pool_create() because we don't have all the information
there. Sergio and Jerin suggested introducing a new ops_name argument
to rte_pktmbuf_pool_create() [2]. From what I remember, this is also
something you were in favor of, right? In that case, we move the
difficult task of choosing the right mempool into the application.

Comments?

Olivier

[1] http://dpdk.org/dev/patchwork/patch/28596/
[2] http://dpdk.org/ml/archives/dev/2017-September/074489.html
[3] http://dpdk.org/dev/patchwork/patch/27610/