Date: Tue, 5 Sep 2017 09:47:26 +0200
From: Olivier MATZ
To: Sergio Gonzalez Monroy
Cc: Santosh Shukla, dev@dpdk.org, thomas@monjalon.net,
 jerin.jacob@caviumnetworks.com, hemant.agrawal@nxp.com
Subject: Re: [dpdk-dev] [PATCH v3 0/2] Dynamically configure mempool handle

On Mon, Sep 04, 2017 at 03:24:38PM +0100, Sergio Gonzalez Monroy wrote:
> Hi Olivier,
>
> On 04/09/2017 14:34, Olivier MATZ wrote:
> > Hi Sergio,
> >
> > On Mon, Sep 04, 2017 at 10:41:56AM +0100, Sergio Gonzalez Monroy wrote:
> > > On 15/08/2017 09:07, Santosh Shukla wrote:
> > > > * Application programming sequence would be:
> > > >     char pref_mempool[RTE_MEMPOOL_OPS_NAMESIZE];
> > > >     rte_eth_dev_get_preferred_pool_ops(ethdev_port_id, pref_mempool /* out */);
> > > >     rte_mempool_create_empty();
> > > >     rte_mempool_set_ops_byname( , pref_mempool, );
> > > >     rte_mempool_populate_default();
> > >
> > > What about introducing an API like:
> > > rte_pktmbuf_pool_create_with_ops(..., ops_name, config_pool);
> > >
> > > I think that API would help for the case where the application wants an
> > > mbuf pool with e.g. the stack handler.
> > > Sure, we can do the empty/set_ops/populate sequence, but the only thing
> > > we want to change from the default pktmbuf_pool_create API is the pool
> > > handler.
> > >
> > > The application just needs to decide which ops handler to use, either
> > > the default or the one suggested by the PMD.
> > >
> > > I think ideally we would have similar APIs:
> > > - rte_mempool_create_with_ops(...)
> > > - rte_mempool_xmem_create_with_ops(...)
> >
> > Today, we may only want to change the mempool handler, but if we
> > need to change something else tomorrow, we would need to add another
> > parameter again, breaking the ABI.
> >
> > If we pass a config structure, adding a new field in it would also break
> > the ABI, except if the structure is opaque, with accessors. These
> > accessors would be functions (e.g. mempool_cfg_new,
> > mempool_cfg_set_pool_ops, ...). This is not so much different from what
> > we have now.
> >
> > The advantage I can see of working on a config structure instead of
> > directly on a mempool is that the API can be reused to build a default
> > config.
> >
> > That said, I think it's quite orthogonal to this patch since we still
> > require the ethdev API.
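
To make the sequence quoted above concrete, here is a rough application-side
sketch. It assumes the rte_eth_dev_get_preferred_pool_ops() API proposed in
this series (taken here to return 0 on success); the other calls are existing
mempool APIs, and element size and error handling are kept minimal:

#include <stdint.h>
#include <rte_ethdev.h>
#include <rte_mempool.h>

/* Sketch: create a mempool using the pool handler preferred by the PMD
 * behind 'port_id'. rte_eth_dev_get_preferred_pool_ops() is the API
 * proposed in this patch series; the rest are existing mempool calls. */
static struct rte_mempool *
create_pool_with_preferred_ops(uint16_t port_id, const char *name,
                               unsigned int n, unsigned int elt_size,
                               unsigned int cache_size, int socket_id)
{
    char pref_ops[RTE_MEMPOOL_OPS_NAMESIZE];
    struct rte_mempool *mp;

    /* Ask the ethdev layer which mempool handler the PMD prefers. */
    if (rte_eth_dev_get_preferred_pool_ops(port_id, pref_ops) != 0)
        return NULL;

    /* Allocate the mempool structure, without any objects yet. */
    mp = rte_mempool_create_empty(name, n, elt_size, cache_size,
                                  0, socket_id, 0);
    if (mp == NULL)
        return NULL;

    /* Select the handler (ring, stack, HW-backed, ...) before populating. */
    if (rte_mempool_set_ops_byname(mp, pref_ops, NULL) != 0) {
        rte_mempool_free(mp);
        return NULL;
    }

    /* Add objects to the pool using the default memory allocation. */
    if (rte_mempool_populate_default(mp) < 0) {
        rte_mempool_free(mp);
        return NULL;
    }

    return mp;
}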
> Fair enough.
>
> Just to double check that we are on the same page:
> - rte_mempool_create is just there for backwards compatibility, and a
>   sequence of create_empty -> set_ops (optional) -> init ->
>   populate_default -> obj_iter (optional) is the recommended way of
>   creating mempools.

Yes, I think rte_mempool_create() has too many arguments.

> - if an application wants to use rte_mempool_xmem_create with a different
>   pool handler, it needs to replicate the function and add the set_ops
>   step.

Yes. And now that xen support is going to be removed, I'm wondering if
this function is still required, given the API is not that easy to use.
Calling rte_mempool_populate_phys() several times looks more flexible.
But this is also another topic.

> Now, if rte_pktmbuf_pool_create is still the preferred mechanism for
> applications to create mbuf pools, wouldn't it make sense to offer the
> option of either passing the ops_name as suggested before, or letting the
> application just set a different pool handler? I understand that it means
> either breaking the API or introducing a new API, but is the solution to
> basically "copy" the whole function into the application and add an
> optional step (set_ops)?

I was quite reticent about introducing rte_pktmbuf_pool_create_with_ops()
because, for the same reasons, we would also want to introduce functions to
create a mempool that uses a different pktmbuf_init() or pool_init()
callback, or to create the pool in external memory, ... and we would end up
with functions with too many arguments.

I like the approach of having several simple functions, because it's easier
to read (even if the code is longer), and it's easily extensible.

Now, if we feel that the mempool ops is more important than the other
parameters, we can consider adding it to rte_pktmbuf_pool_create().

> With these patches we can:
> - change the default pool handler with an EAL option, which does *not*
>   allow pktmbuf_pool_create to be used with different handlers.
> - check the PMD's preferred/supported handlers, but then we need to
>   implement a function with the whole sequence; we cannot use
>   pktmbuf_pool_create.
>
> It looks to me then that any application that wants to use a pool handler
> other than the default/ring needs to implement its own create_pool with
> set_ops.
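
For illustration, the "copy the whole function and add set_ops" approach
discussed above could look roughly like the sketch below. The helper name
app_pktmbuf_pool_create_with_ops() is made up for this example; the body
mirrors what rte_pktmbuf_pool_create() does internally, with the
rte_mempool_set_ops_byname() step added before populating:

#include <string.h>
#include <stdint.h>
#include <rte_mbuf.h>
#include <rte_mempool.h>
#include <rte_errno.h>

/* Application-side mbuf pool constructor taking the ops name as an
 * extra argument (illustrative name, not an existing DPDK API). */
static struct rte_mempool *
app_pktmbuf_pool_create_with_ops(const char *name, unsigned int n,
                                 unsigned int cache_size, uint16_t priv_size,
                                 uint16_t data_room_size, int socket_id,
                                 const char *ops_name)
{
    struct rte_pktmbuf_pool_private mbp_priv;
    struct rte_mempool *mp;
    unsigned int elt_size;
    int ret;

    elt_size = sizeof(struct rte_mbuf) + priv_size + data_room_size;
    memset(&mbp_priv, 0, sizeof(mbp_priv));
    mbp_priv.mbuf_data_room_size = data_room_size;
    mbp_priv.mbuf_priv_size = priv_size;

    mp = rte_mempool_create_empty(name, n, elt_size, cache_size,
                                  sizeof(struct rte_pktmbuf_pool_private),
                                  socket_id, 0);
    if (mp == NULL)
        return NULL;

    /* The only step that differs from rte_pktmbuf_pool_create(). */
    ret = rte_mempool_set_ops_byname(mp, ops_name, NULL);
    if (ret != 0) {
        rte_mempool_free(mp);
        rte_errno = -ret;
        return NULL;
    }

    /* Same pool/object init sequence as the default mbuf pool creation. */
    rte_pktmbuf_pool_init(mp, &mbp_priv);

    ret = rte_mempool_populate_default(mp);
    if (ret < 0) {
        rte_mempool_free(mp);
        rte_errno = -ret;
        return NULL;
    }

    rte_mempool_obj_iter(mp, rte_pktmbuf_init, NULL);
    return mp;
}

Whether such a helper lives in each application or becomes a new
rte_pktmbuf_pool_create_with_ops() in the mbuf library is exactly the
trade-off discussed in this thread.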