From: Olivier MATZ
To: Sergio Gonzalez Monroy
Cc: Santosh Shukla, dev@dpdk.org, thomas@monjalon.net, jerin.jacob@caviumnetworks.com, hemant.agrawal@nxp.com
Date: Mon, 4 Sep 2017 15:34:55 +0200
Subject: Re: [dpdk-dev] [PATCH v3 0/2] Dynamically configure mempool handle
Message-ID: <20170904133437.ym3cd7n3dswt4yjb@neon>
In-Reply-To: <5724dc82-952a-8ca5-2f99-f463a54ec07d@intel.com>

Hi Sergio,

On Mon, Sep 04, 2017 at 10:41:56AM +0100, Sergio Gonzalez Monroy wrote:
> On 15/08/2017 09:07, Santosh Shukla wrote:
> > v3:
> > - Rebased on top of v17.11-rc0.
> > - Updated version.map entry to v17.11.
> >
> > v2:
> >
> > DPDK has support for HW and SW mempools. Those mempools
> > can work optimally for specific PMDs.
> > Example:
> > SW ring-based PMD for Intel NICs.
> > HW mempool manager dpaa2 for the dpaa2 PMD.
> > HW mempool manager fpa for the octeontx PMD.
> >
> > There could be a use-case where NICs from different vendors are used
> > on the same platform, and the user would like to configure mempools so that
> > each of those NICs uses its preferred mempool (for performance reasons).
> >
> > The current mempool infrastructure doesn't support such a use-case.
> >
> > This patchset tries to address that problem in 2 steps:
> >
> > 0) Allowing the user to dynamically configure the mempool handle by
> > passing the pool handle as an EAL arg to `--mbuf-pool-ops=`.
> >
> > 1) Allowing PMDs to advertise their preferred pool to an application.
> > From an application point of view:
> > - The application must ask the PMD about its preferred pool.
> > - The PMD responds with its preferred pool, otherwise
> > CONFIG_RTE_MEMPOOL_DEFAULT_OPS will be used for that PMD.
> >
> > * The application programming sequence would be:
> > char pref_mempool[RTE_MEMPOOL_OPS_NAMESIZE];
> > rte_eth_dev_get_preferred_pool_ops(ethdev_port_id, pref_mempool /* out */);
> > rte_mempool_create_empty();
> > rte_mempool_set_ops_byname( , pref_mempool, );
> > rte_mempool_populate_default();
>
> What about introducing an API like:
> rte_pktmbuf_pool_create_with_ops(..., ops_name, config_pool);
>
> I think that API would help for the case where the application wants an mbuf
> pool with e.g. the stack handler.
> Sure, we can do the empty/set_ops/populate sequence, but the only thing we
> want to change from the default pktmbuf_pool_create API is the pool handler.
>
> The application just needs to decide the ops handler to use, either the
> default or the one suggested by the PMD?
>
> I think ideally we would have similar APIs:
> - rte_mempool_create_with_ops(...)
> - rte_mempool_xmem_create_with_ops(...)

Today we may only want to change the mempool handler, but if we need to
change something else tomorrow, we would have to add yet another parameter,
breaking the ABI.

If we pass a config structure instead, adding a new field to it would also
break the ABI, except if the structure is opaque, with accessors. These
accessors would be functions (e.g. mempool_cfg_new, mempool_cfg_set_pool_ops,
...). This is not so different from what we have now.

The advantage I can see of working on a config structure instead of directly
on a mempool is that the API can be reused to build a default config.

That said, I think it's quite orthogonal to this patch, since we still
require the ethdev API.

Olivier
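
For reference, the create_empty/set_ops/populate sequence from the cover
letter could look roughly like this in application code. It assumes the
rte_eth_dev_get_preferred_pool_ops() call proposed by this patchset (taken
here as returning 0 or a negative value, with the ops name written to the
output buffer); the pool name, sizing, and error handling below are
illustrative only, not part of the patchset:

    #include <rte_ethdev.h>
    #include <rte_lcore.h>
    #include <rte_mbuf.h>
    #include <rte_mempool.h>

    static struct rte_mempool *
    pktmbuf_pool_create_for_port(uint16_t port_id, unsigned int nb_mbufs)
    {
        char pref_mempool[RTE_MEMPOOL_OPS_NAMESIZE];
        struct rte_mempool *mp;

        /* Ask the PMD for its preferred mempool ops (proposed API). */
        if (rte_eth_dev_get_preferred_pool_ops(port_id, pref_mempool) < 0)
            return NULL;

        /* Create an empty mbuf pool, then bind it to the preferred ops. */
        mp = rte_mempool_create_empty("port_mbuf_pool", nb_mbufs,
                sizeof(struct rte_mbuf) + RTE_MBUF_DEFAULT_BUF_SIZE,
                250, sizeof(struct rte_pktmbuf_pool_private),
                rte_socket_id(), 0);
        if (mp == NULL)
            return NULL;

        if (rte_mempool_set_ops_byname(mp, pref_mempool, NULL) != 0 ||
                rte_mempool_populate_default(mp) < 0) {
            rte_mempool_free(mp);
            return NULL;
        }

        /* Usual mbuf pool init, as done by rte_pktmbuf_pool_create(). */
        rte_pktmbuf_pool_init(mp, NULL);
        rte_mempool_obj_iter(mp, rte_pktmbuf_init, NULL);

        return mp;
    }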
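
The opaque-config idea could be sketched as below; none of these types or
functions exist in DPDK, the names simply extend the examples given in the
mail (mempool_cfg_new, mempool_cfg_set_pool_ops) to show why accessors avoid
the ABI problem:

    /* Hypothetical opaque config; the layout is hidden from applications. */
    struct rte_mempool_cfg;

    struct rte_mempool_cfg *rte_mempool_cfg_new(void);
    void rte_mempool_cfg_free(struct rte_mempool_cfg *cfg);

    /* A new knob becomes a new accessor instead of a new struct field,
     * so adding one later does not change the structure layout seen by
     * applications (no ABI break). */
    int rte_mempool_cfg_set_pool_ops(struct rte_mempool_cfg *cfg,
            const char *ops_name, void *ops_config);

    struct rte_mempool *rte_pktmbuf_pool_create_with_cfg(const char *name,
            unsigned int n, int socket_id,
            const struct rte_mempool_cfg *cfg);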