From: Sergio Gonzalez Monroy
To: Olivier MATZ
Cc: Santosh Shukla, dev@dpdk.org, thomas@monjalon.net, jerin.jacob@caviumnetworks.com, hemant.agrawal@nxp.com
Subject: Re: [dpdk-dev] [PATCH v3 0/2] Dynamically configure mempool handle
Date: Mon, 4 Sep 2017 15:24:38 +0100
Message-ID: <1b3482e9-7e49-5f15-afc9-56c0c098c1ab@intel.com>
In-Reply-To: <20170904133437.ym3cd7n3dswt4yjb@neon>
References: <20170720070613.18211-2-santosh.shukla@caviumnetworks.com> <20170815080717.9413-1-santosh.shukla@caviumnetworks.com> <5724dc82-952a-8ca5-2f99-f463a54ec07d@intel.com> <20170904133437.ym3cd7n3dswt4yjb@neon>

Hi Olivier,

On 04/09/2017 14:34, Olivier MATZ wrote:
> Hi Sergio,
>
> On Mon, Sep 04, 2017 at 10:41:56AM +0100, Sergio Gonzalez Monroy wrote:
>> On 15/08/2017 09:07, Santosh Shukla wrote:
>>> v3:
>>> - Rebased on top of v17.11-rc0
>>> - Updated version.map entry to v17.11.
>>>
>>> v2:
>>>
>>> DPDK has support for hw and sw mempools. Those mempools
>>> can work optimally for specific PMDs.
>>> Example:
>>> sw ring based PMD for Intel NICs.
>>> HW mempool manager dpaa2 for dpaa2 PMD.
>>> HW mempool manager fpa for octeontx PMD.
>>>
>>> There could be a use case where NICs from different vendors are used
>>> on the same platform and the user would like to configure mempools in
>>> such a way that each of those NICs uses its preferred mempool (for
>>> performance reasons).
>>>
>>> The current mempool infrastructure doesn't support such a use case.
>>>
>>> This patchset tries to address that problem in 2 steps:
>>>
>>> 0) Allowing the user to dynamically configure the mempool handle by
>>> passing the pool handle as an EAL arg to `--mbuf-pool-ops=`.
>>>
>>> 1) Allowing PMDs to advertise their preferred pool to an application.
>>> From an application point of view:
>>> - The application must ask the PMD about its preferred pool.
>>> - The PMD responds with its preferred pool, otherwise
>>> CONFIG_RTE_MEMPOOL_DEFAULT_OPS will be used for that PMD.
>>>
>>> * The application programming sequence would be
>>>     char pref_mempool[RTE_MEMPOOL_OPS_NAMESIZE];
>>>     rte_eth_dev_get_preferred_pool_ops(ethdev_port_id, pref_mempool /* out */);
>>>     rte_mempool_create_empty();
>>>     rte_mempool_set_ops_byname( , pref_mempool, );
>>>     rte_mempool_populate_default();
>>
>> What about introducing an API like:
>> rte_pktmbuf_pool_create_with_ops(..., ops_name, config_pool);
>>
>> I think that API would help for the case the application wants an mbuf pool
>> with e.g. the stack handler.
>> Sure we can do the empty/set_ops/populate sequence, but the only thing we
>> want to change from the default pktmbuf_pool_create API is the pool handler.
>>
>> The application just needs to decide which ops handler to use: either the
>> default or the one suggested by the PMD.
>>
>> I think ideally we would have similar APIs:
>> - rte_mempool_create_with_ops (...)
>> - rte_mempool_xmem_create_with_ops (...)
>
> Today, we may only want to change the mempool handler, but if we
> need to change something else tomorrow, we would need to add another
> parameter again, breaking the ABI.
>
> If we pass a config structure, adding a new field in it would also break the
> ABI, except if the structure is opaque, with accessors. These accessors would be
> functions (ex: mempool_cfg_new, mempool_cfg_set_pool_ops, ...). This is not so
> much different from what we have now.
>
> The advantage I can see of working on a config structure instead of directly on
> a mempool is that the API can be reused to build a default config.
>
> That said, I think it's quite orthogonal to this patch since we still require
> the ethdev API.
>
> Olivier

Fair enough.

Just to double check that we are on the same page:
- rte_mempool_create is just there for backwards compatibility, and a sequence
  of create_empty -> set_ops (optional) -> init -> populate_default ->
  obj_iter (optional) is the recommended way of creating mempools.
- If an application wants to use rte_mempool_xmem_create with a different pool
  handler, it needs to replicate the function and add the set_ops step.

Now, if rte_pktmbuf_pool_create is still the preferred mechanism for
applications to create mbuf pools, wouldn't it make sense to offer the option
of either passing the ops_name as suggested before, or letting the application
just set a different pool handler? I understand that it is either breaking the
API or introducing a new API, but is the solution to basically "copy" the
whole function into the application and add an optional step (set_ops)?

With these patches we can:
- Change the default pool handler with an EAL option, which does *not* allow
  pktmbuf_pool_create with different handlers.
- Check the PMDs' preferred/supported handlers, but then we need to implement
  a function with the whole sequence; we cannot use pktmbuf_pool_create.

It looks to me then that any application that wants to use a pool handler
other than the default/ring needs to implement its own create_pool with
set_ops, roughly as sketched below.

Thanks,
Sergio
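
For reference, a minimal sketch of the create_empty/set_ops/populate sequence
discussed above, assuming the rte_eth_dev_get_preferred_pool_ops() API proposed
by this patchset; the helper name is illustrative, error handling is trimmed,
and the layout mirrors rte_pktmbuf_pool_create() with no mbuf private area:

```c
#include <stdio.h>

#include <rte_ethdev.h>
#include <rte_mbuf.h>
#include <rte_mempool.h>

static struct rte_mempool *
pktmbuf_pool_create_preferred(const char *name, uint8_t port_id,
			      unsigned int n, unsigned int cache_size,
			      uint16_t data_room_size, int socket_id)
{
	char ops_name[RTE_MEMPOOL_OPS_NAMESIZE];
	struct rte_pktmbuf_pool_private mbp_priv;
	struct rte_mempool *mp;
	unsigned int elt_size;

	/* Ask the PMD for its preferred mempool ops (API proposed by this
	 * patchset); fall back to the build-time default otherwise. */
	if (rte_eth_dev_get_preferred_pool_ops(port_id, ops_name) < 0)
		snprintf(ops_name, sizeof(ops_name), "%s",
			 RTE_MBUF_DEFAULT_MEMPOOL_OPS);

	/* Same element layout as rte_pktmbuf_pool_create(). */
	elt_size = sizeof(struct rte_mbuf) + (unsigned int)data_room_size;
	mbp_priv.mbuf_data_room_size = data_room_size;
	mbp_priv.mbuf_priv_size = 0;

	mp = rte_mempool_create_empty(name, n, elt_size, cache_size,
				      sizeof(struct rte_pktmbuf_pool_private),
				      socket_id, 0);
	if (mp == NULL)
		return NULL;

	/* The only step that differs from rte_pktmbuf_pool_create(). */
	if (rte_mempool_set_ops_byname(mp, ops_name, NULL) != 0) {
		rte_mempool_free(mp);
		return NULL;
	}

	rte_pktmbuf_pool_init(mp, &mbp_priv);

	if (rte_mempool_populate_default(mp) < 0) {
		rte_mempool_free(mp);
		return NULL;
	}

	rte_mempool_obj_iter(mp, rte_pktmbuf_init, NULL);

	return mp;
}
```

Apart from the ops lookup and set_ops call, this is the same sequence
rte_pktmbuf_pool_create() already performs internally, which is exactly the
duplication being questioned above.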
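As for the opaque config structure Olivier mentions, a rough sketch of the
accessor pattern might look like the following; none of these types or
functions exist in DPDK, the names are purely illustrative:

```c
/* Illustrative only: sketch of an opaque config with accessors. Adding a
 * future option means adding a new accessor, not a new create() parameter,
 * so the create function's ABI stays stable. */
struct rte_mempool_cfg;				/* layout hidden from apps */

struct rte_mempool_cfg *rte_mempool_cfg_new(void);	/* default config */
void rte_mempool_cfg_free(struct rte_mempool_cfg *cfg);
int rte_mempool_cfg_set_pool_ops(struct rte_mempool_cfg *cfg,
				 const char *ops_name, void *ops_config);

struct rte_mempool *
rte_mempool_create_with_cfg(const char *name, unsigned int n,
			    unsigned int elt_size,
			    const struct rte_mempool_cfg *cfg);
```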