From: "Burakov, Anatoly"
To: Andrew Rybchenko, dev@dpdk.org
Cc: Olivier MATZ
Subject: Re: [dpdk-dev] [PATCH v1 3/9] mempool: remove callback to get capabilities
Date: Thu, 15 Mar 2018 12:00:08 +0000
Message-ID: <7e266ed5-862f-8ecb-d80d-c26c7a7e4e4c@intel.com>
In-Reply-To: <7873dc42-d58f-7dd2-abe0-3eac3ee5ebfb@solarflare.com>

On 15-Mar-18 11:49 AM, Andrew Rybchenko wrote:
> On 03/15/2018 12:48 PM, Burakov, Anatoly wrote:
>> On 14-Mar-18 5:24 PM, Andrew Rybchenko wrote:
>>> On 03/14/2018 07:53 PM, Burakov, Anatoly wrote:
>>>> On 14-Mar-18 4:12 PM, Andrew Rybchenko wrote:
>>>>> On 03/14/2018 05:40 PM, Burakov, Anatoly wrote:
>>>>>> On 10-Mar-18 3:39 PM, Andrew Rybchenko wrote:
>>>>>>> The callback was introduced to let generic code know the octeontx
>>>>>>> mempool driver's requirements: use a single physically contiguous
>>>>>>> memory chunk to store all objects, and align object addresses to
>>>>>>> the total object size. Now these requirements are met using new
>>>>>>> callbacks to calculate the required memory chunk size and to
>>>>>>> populate objects using the provided memory chunk.
>>>>>>>
>>>>>>> These capability flags are not used anywhere else.
>>>>>>>
>>>>>>> Restricting capabilities to flags is not generic and is likely to
>>>>>>> be insufficient to describe mempool driver features. If required
>>>>>>> in the future, an API which returns structured information may be
>>>>>>> added.
>>>>>>>
>>>>>>> Signed-off-by: Andrew Rybchenko
>>>>>>> ---
>>>>>>
>>>>>> Just a general comment - it is not enough to describe minimum
>>>>>> memchunk requirements. With the memory hotplug patchset that is
>>>>>> hopefully getting merged in 18.05, memzones will no longer be
>>>>>> guaranteed to be IOVA-contiguous. So, if a driver requires its
>>>>>> mempool to be populated not just from a single memzone, but from a
>>>>>> single *physically contiguous* memzone, going by callbacks alone
>>>>>> will not do, because whether or not something should be a single
>>>>>> memzone says nothing about whether that memzone has to also be
>>>>>> IOVA-contiguous.
>>>>>>
>>>>>> So i believe this needs to stay in one form or another.
>>>>>>
>>>>>> (Also, it would be nice to have a flag that a user could pass to
>>>>>> mempool_create that would force the memzone reservation to be
>>>>>> IOVA-contiguous, but that's a topic for another conversation. The
>>>>>> prime user for this would be KNI.)
>>>>>
>>>>> I think that min_chunk_size should be treated as IOVA-contiguous.
>>>>
>>>> Why? It's perfectly reasonable to e.g. implement a software mempool
>>>> driver that would perform some optimizations due to all objects
>>>> being in the same VA-contiguous memzone, yet not be dependent on
>>>> the underlying physical memory layout. These are two separate
>>>> concerns IMO.
>>>
>>> It looks like there is some misunderstanding here, or I simply don't
>>> understand your point.
>>> Above, I mean that a driver should be able to advertise its
>>> requirements for IOVA-contiguous regions.
>>> If a driver does not care about the physical memory layout, no
>>> problem.
>>
>> Please correct me if i'm wrong, but my understanding was that you
>> wanted to use min_chunk as a way to express minimum requirements for
>> IOVA-contiguous memory. If i understood you correctly, i don't think
>> that's the way to go, because there could be valid use cases where a
>> mempool driver would like to advertise min_chunk_size as equal to its
>> total size (i.e. allocate everything in a single memzone), yet not
>> require that memzone to be IOVA-contiguous. I think these are two
>> different concerns, and one does not, and should not, imply the
>> other.
>
> Aha, you're saying that virtual-contiguous and IOVA-contiguous
> requirements are different things, and that there could be use cases
> where virtual contiguity is important but IOVA contiguity is not
> required. That is perfectly fine.
> As I understand it, IOVA-contiguous (physical) typically implies
> virtual-contiguous as well, and a requirement to have everything
> virtually contiguous but only some blocks physically contiguous is
> unlikely. So, the requirement may be reduced to either virtual or
> physical contiguity. If a mempool does not care about physical
> contiguity at all, the MEMPOOL_F_NO_PHYS_CONTIG flag should be used,
> and min_chunk_size then expresses virtual-contiguity requirements. If
> a mempool requires physically contiguous objects, there is *no*
> MEMPOOL_F_NO_PHYS_CONTIG flag, and min_chunk_size means physical
> contiguity requirements.

Fair point. I think we're in agreement now :) This will need to be
documented then.
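To illustrate what we'd be documenting - a minimal sketch of an
octeontx-style calc_mem_size callback under the agreed semantics. The
callback name and its internals are hypothetical; the signature follows
the calc_mem_size op introduced earlier in this series, and exact names
may still change before merge:

#include <rte_mempool.h>

/*
 * Hypothetical driver callback: all objects must live in a single
 * memory chunk, and each object must be aligned to the total object
 * size.
 */
static ssize_t
example_calc_mem_size(const struct rte_mempool *mp, uint32_t obj_num,
		uint32_t pg_shift __rte_unused,
		size_t *min_chunk_size, size_t *align)
{
	/* Total per-object footprint, including header and trailer. */
	size_t obj_size = mp->header_size + mp->elt_size +
			  mp->trailer_size;
	size_t total = (size_t)obj_num * obj_size;

	/*
	 * Ask for one chunk covering all objects. Per the agreement
	 * above, generic code treats this as VA-contiguous when
	 * MEMPOOL_F_NO_PHYS_CONTIG is set in mp->flags, and as
	 * IOVA-contiguous otherwise - no separate capability flag
	 * is needed.
	 */
	*min_chunk_size = total;

	/* Align objects to their total size, as octeontx requires. */
	*align = obj_size;

	return total;
}

That way, generic code can combine min_chunk_size with the presence or
absence of MEMPOOL_F_NO_PHYS_CONTIG to decide whether an
IOVA-contiguous reservation is actually required.

-- 
Thanks,
Anatoly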