From: "Hunt, David"
To: Olivier MATZ, Jan Viktorin
Cc: dev@dpdk.org, jerin.jacob@caviumnetworks.com, shreyansh.jain@nxp.com
Subject: Re: [dpdk-dev] [PATCH v12 0/3] mempool: add external mempool manager
Date: Wed, 15 Jun 2016 15:02:50 +0100
Message-ID: <5761600A.7030801@intel.com>
In-Reply-To: <57615D1F.40700@6wind.com>
References: <1465919341-3209-1-git-send-email-david.hunt@intel.com>
 <1465976824-83823-1-git-send-email-david.hunt@intel.com>
 <20160615121358.5ef9f142@pcviktorin.fit.vutbr.cz>
 <57614043.9090603@intel.com> <57614402.6020708@6wind.com>
 <57614C41.1040107@intel.com> <57615D1F.40700@6wind.com>

On 15/6/2016 2:50 PM, Olivier MATZ wrote:
> Hi David,
>
> On 06/15/2016 02:38 PM, Hunt, David wrote:
>>
>> On 15/6/2016 1:03 PM, Olivier MATZ wrote:
>>> Hi,
>>>
>>> On 06/15/2016 01:47 PM, Hunt, David wrote:
>>>>
>>>> On 15/6/2016 11:13 AM, Jan Viktorin wrote:
>>>>> Hi,
>>>>>
>>>>> I've got one last question. Initially, I was interested in
>>>>> creating my own external memory provider based on a Linux kernel
>>>>> driver. So, I've got an open file descriptor that points to a
>>>>> device which can mmap memory regions for me.
>>>>>
>>>>> ...
>>>>> int fd = open("/dev/uio0", ...);
>>>>> ...
>>>>> struct rte_mempool *pool = rte_mempool_create_empty(...);
>>>>> rte_mempool_set_ops_byname(pool, "uio_allocator_ops");
>>>>>
>>>>> I am not sure how to pass the file descriptor pointer. I thought
>>>>> it would be possible via rte_mempool_alloc, but it's not... Is it
>>>>> possible to solve this case?
>>>>>
>>>>> The allocator is device-specific.
>>>>>
>>>>> Regards
>>>>> Jan
>>>>
>>>> This particular use case is not covered.
>>>>
>>>> We did discuss this before, and an opaque pointer was proposed, but
>>>> it did not make it in.
>>>> http://article.gmane.org/gmane.comp.networking.dpdk.devel/39821
>>>> (and following emails in that thread)
>>>>
>>>> So, the options for this use case are as follows:
>>>> 1. Use the pool_data to pass data in to the alloc, then set the
>>>> pool_data pointer before coming back from alloc. (It's a bit of a
>>>> hack, but it means no code change.)
>>>> 2. Add an extra parameter to the alloc function. The simplest way
>>>> I can think of doing this is to take the *opaque passed into
>>>> rte_mempool_populate_phys, and pass it on into the alloc function.
>>>> This will have minimal impact on the public APIs, as there is
>>>> already an opaque in the _populate_ funcs; we're just reusing it
>>>> for the alloc.
>>>>
>>>> Do others think option 2 is OK to add at this late stage? Even if
>>>> the patch set has already been ACK'd?
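
(To make option 2 concrete: it would roughly mean changing the alloc
callback from

    typedef int (*rte_mempool_alloc_t)(struct rte_mempool *mp);

to something like

    typedef int (*rte_mempool_alloc_t)(struct rte_mempool *mp,
                                       void *opaque);

where the opaque forwarded by the populate functions is the same
pointer the application already hands to rte_mempool_populate_phys().
The two-argument form is only a sketch of the idea, not code from the
patchset.)
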
>>>
>>> Jan's use case looks relevant.
>>>
>>> What about changing:
>>>
>>> rte_mempool_set_ops_byname(struct rte_mempool *mp, const char *name)
>>>
>>> into:
>>>
>>> rte_mempool_set_ops(struct rte_mempool *mp, const char *name,
>>>                     void *opaque)
>>>
>>> ?
>>>
>>> The opaque pointer would be saved in the mempool structure, and used
>>> when the mempool is populated (calling mempool_ops_alloc).
>>> The type of the structure pointed to by the opaque has to be defined
>>> (and documented) in each mempool_ops manager.
>>>
>>
>> Yes, that was another option, which has the additional impact of
>> needing an opaque added to the mempool struct. If we use the opaque
>> from the _populate_ function, we use it straight away in the alloc;
>> no storage needed.
>>
>> Also, do you think we need to go ahead with this change, or can we
>> add it later as an improvement?
>
> The opaque in populate_phys() is already used for something else
> (i.e. the argument for the free callback of the memory chunk).
> I'm afraid it could cause confusion to have it used for 2 different
> things.
>
> About the change, I think it could be good to have it in 16.11,
> because it will probably change the API, and we should avoid
> changing it each version ;)
>
> So I'd vote to have it in the patchset for consistency.

Sure, we should avoid breaking the API just after we created it. :)

OK, here's a slightly different proposal. If we add a string to
_ops_byname, yes, that will work for Jan's case. However, if we add a
void *, that gives us the flexibility of passing anything we want. We
can then store the void * in the mempool struct as void *pool_config,
so that when the alloc gets called, it can access whatever is stored
at *pool_config and do the correct initialisation/allocation. In Jan's
use case, this can simply be cast to a string. In future cases, it can
be a struct, which could include new flags.

I think that change is minimal enough to be low risk at this stage.

Thoughts?

Dave.
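
P.S. To make that concrete, Jan's flow would then look something like
the sketch below. This is only an illustration of the proposal: the
three-argument rte_mempool_set_ops_byname() and the pool_config field
are what's proposed above, not merged code, and the uio_alloc()
callback (registered elsewhere as "uio_allocator_ops") is hypothetical.
Error handling is omitted for brevity.

#include <fcntl.h>
#include <rte_mempool.h>

/* allocator side: the alloc callback reads the fd back out of
 * pool_config when the pool is populated */
static int
uio_alloc(struct rte_mempool *mp)
{
    int fd = *(int *)mp->pool_config;

    /* mmap the device memory via fd, hang the allocator state
     * off mp->pool_data, return 0 on success */
    (void)fd;
    return 0;
}

/* application side */
static int
setup_uio_pool(void)
{
    int fd = open("/dev/uio0", O_RDWR);
    struct rte_mempool *mp;

    mp = rte_mempool_create_empty("uio_pool", 512, 2048, 32, 0,
                                  SOCKET_ID_ANY, 0);

    /* the extra void * would be stashed in mp->pool_config ... */
    rte_mempool_set_ops_byname(mp, "uio_allocator_ops", &fd);

    /* ... and read back by the alloc callback during populate */
    return rte_mempool_populate_default(mp);
}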