Date: Mon, 4 Sep 2017 14:11:14 +0200
From: Olivier MATZ
To: Santosh Shukla
Cc: dev@dpdk.org, thomas@monjalon.net, jerin.jacob@caviumnetworks.com,
 hemant.agrawal@nxp.com
Message-ID: <20170904121113.jdilonuhw77c4vx7@neon>
In-Reply-To: <20170815080717.9413-3-santosh.shukla@caviumnetworks.com>
References: <20170720070613.18211-2-santosh.shukla@caviumnetworks.com>
 <20170815080717.9413-1-santosh.shukla@caviumnetworks.com>
 <20170815080717.9413-3-santosh.shukla@caviumnetworks.com>
Subject: Re: [dpdk-dev] [PATCH v3 2/2] ethdev: allow pmd to advertise pool handle

Hi Santosh,

On Tue, Aug 15, 2017 at 01:37:17PM +0530, Santosh Shukla wrote:
> Now that dpdk supports more than one mempool driver, and each mempool
> driver works best for a specific PMD, for example:
>  - sw ring based mempool for Intel PMD drivers
>  - dpaa2 HW mempool manager for the dpaa2 PMD driver
>  - fpa HW mempool manager for the Octeontx PMD driver
>
> applications would like to know the preferred mempool for a PMD in
> advance, before port setup.
>
> Introduce the rte_eth_dev_get_preferred_pool_ops() API, which allows a
> PMD to advertise its pool capability to the application.
>
> The application-side programming sequence would be:
>
> char pref_mempool[RTE_MEMPOOL_OPS_NAMESIZE];
> rte_eth_dev_get_preferred_pool_ops(ethdev_port_id, pref_mempool /*out*/);
> rte_mempool_create_empty();
> rte_mempool_set_ops_byname( , pref_mempool, );
> rte_mempool_populate_default();
>
> Signed-off-by: Santosh Shukla
> ---
> v2 --> v3:
>  - Updated version.map entry to DPDK_v17.11.
>
> v1 --> v2:
>  - Renamed _get_preferred_pool to _get_preferred_pool_ops().
>    Per v1 review feedback, Olivier suggested renaming the API to
>    rte_eth_dev_pool_ops_supported(), considering that the 2nd param of
>    that API would return the pool handle 'priority' for that port.
>    However, per v1 [1], we are opting for approach 1), where the ethdev
>    API returns the _preferred_ pool handle to the application, and it is
>    up to the application to decide on policy - whether it wants to
>    create the pool with the received preferred pool handle or not. For
>    more details on this discussion, refer to [1].

Well, I still think it would be more flexible to have an API like
rte_eth_dev_pool_ops_supported(uint8_t port_id, const char *pool)

It supports the easy case (= one preferred mempool) without much pain,
and provides a more precise manner to describe what is supported or not
by the driver. Example: "pmd_foo" prefers "mempool_foo" (best perf), but
also supports "mempool_stack" and "mempool_ring", while "mempool_bar"
won't work at all.
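
Roughly, an application could then select its pool ops like below. Just
a sketch: the return convention (0 for the preferred ops, > 0 for a
supported one, < 0 otherwise) and the mempool names are only examples,
this API does not exist yet:

/* sketch only: assumes rte_eth_dev_pool_ops_supported() returns 0 for
 * the PMD's preferred ops, a positive value for a supported ops, and a
 * negative value if the ops cannot work with this port */
static const char *
select_pool_ops(uint8_t port_id)
{
	static const char * const candidates[] = {
		"mempool_foo", "mempool_stack", "mempool_ring",
	};
	const char *fallback = NULL;
	unsigned int i;
	int ret;

	for (i = 0; i < RTE_DIM(candidates); i++) {
		ret = rte_eth_dev_pool_ops_supported(port_id, candidates[i]);
		if (ret == 0)
			return candidates[i]; /* preferred by the PMD */
		if (ret > 0 && fallback == NULL)
			fallback = candidates[i]; /* supported, keep it */
	}
	/* nothing advertised: fall back to the EAL default ops */
	return fallback != NULL ? fallback : rte_eal_mbuf_default_mempool_ops();
}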
Having only one preferred pool_ops also prevents smoothly renaming a
pool (supporting both names for some time), or having two names for
different variants of the same pool_ops (ex: ring_mp_mc, ring_sp_sc).

But if the users (I guess at least Cavium and NXP) are happy with what
you propose, I'm fine with it.

> --- a/lib/librte_ether/rte_ethdev.c
> +++ b/lib/librte_ether/rte_ethdev.c
> @@ -3409,3 +3409,21 @@ rte_eth_dev_adjust_nb_rx_tx_desc(uint8_t port_id,
>
>  	return 0;
>  }
> +
> +int
> +rte_eth_dev_get_preferred_pool_ops(uint8_t port_id, char *pool)
> +{
> +	struct rte_eth_dev *dev;
> +	const char *tmp;
> +
> +	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
> +
> +	dev = &rte_eth_devices[port_id];
> +
> +	if (*dev->dev_ops->get_preferred_pool_ops == NULL) {
> +		tmp = rte_eal_mbuf_default_mempool_ops();
> +		snprintf(pool, RTE_MBUF_POOL_OPS_NAMESIZE, "%s", tmp);
> +		return 0;
> +	}
> +	return (*dev->dev_ops->get_preferred_pool_ops)(dev, pool);
> +}

I think adding the length of the pool buffer to the function arguments
would be better: only documenting that the length is
RTE_MBUF_POOL_OPS_NAMESIZE looks a bit weak to me, because if one day it
changes to another value, the users of the function may not notice it
(no ABI/API change).

One more comment: it would be helpful to have one user of this API in
the example apps or testpmd, roughly like the sketch below.

Olivier
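
Such a user could look more or less like this. Just a sketch: the third
argument of rte_eth_dev_get_preferred_pool_ops() is the extra length
argument suggested above, and NB_MBUF / MEMPOOL_CACHE_SIZE are
placeholder constants in the style of the example apps:

/* create the mbuf pool with the ops advertised by the port */
char ops_name[RTE_MBUF_POOL_OPS_NAMESIZE];
struct rte_mempool *mp;

if (rte_eth_dev_get_preferred_pool_ops(port_id, ops_name,
				       sizeof(ops_name)) < 0)
	rte_exit(EXIT_FAILURE, "cannot get preferred pool ops\n");

mp = rte_mempool_create_empty("mbuf_pool", NB_MBUF,
	sizeof(struct rte_mbuf) + RTE_MBUF_DEFAULT_BUF_SIZE,
	MEMPOOL_CACHE_SIZE, sizeof(struct rte_pktmbuf_pool_private),
	rte_socket_id(), 0);
if (mp == NULL)
	rte_exit(EXIT_FAILURE, "cannot create mempool\n");

if (rte_mempool_set_ops_byname(mp, ops_name, NULL) != 0)
	rte_exit(EXIT_FAILURE, "cannot set mempool ops %s\n", ops_name);

rte_pktmbuf_pool_init(mp, NULL);
if (rte_mempool_populate_default(mp) < 0)
	rte_exit(EXIT_FAILURE, "cannot populate mempool\n");
rte_mempool_obj_iter(mp, rte_pktmbuf_init, NULL);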