Subject: Re: [dpdk-dev] [PATCH v1 3/9] mempool: remove callback to get capabilities
From: Andrew Rybchenko
To: "Burakov, Anatoly"
CC: Olivier MATZ
Date: Thu, 15 Mar 2018 15:44:34 +0300
List-Id: DPDK patches and discussions
On 03/15/2018 03:00 PM, Burakov, Anatoly wrote:
> On 15-Mar-18 11:49 AM, Andrew Rybchenko wrote:
>> On 03/15/2018 12:48 PM, Burakov, Anatoly wrote:
>>> On 14-Mar-18 5:24 PM, Andrew Rybchenko wrote:
>>>> On 03/14/2018 07:53 PM, Burakov, Anatoly wrote:
>>>>> On 14-Mar-18 4:12 PM, Andrew Rybchenko wrote:
>>>>>> On 03/14/2018 05:40 PM, Burakov, Anatoly wrote:
>>>>>>> On 10-Mar-18 3:39 PM, Andrew Rybchenko wrote:
>>>>>>>> The callback was introduced to let generic code know that the
>>>>>>>> octeontx mempool driver requires a single physically contiguous
>>>>>>>> memory chunk to store all objects, with each object address
>>>>>>>> aligned to the total object size. These requirements are now
>>>>>>>> met using the new callbacks to calculate the required memory
>>>>>>>> chunk size and to populate objects using the provided memory
>>>>>>>> chunk.
>>>>>>>>
>>>>>>>> These capability flags are not used anywhere else.
>>>>>>>>
>>>>>>>> Restricting capabilities to flags is not generic and likely to
>>>>>>>> be insufficient to describe mempool driver features. If
>>>>>>>> required in the future, an API which returns structured
>>>>>>>> information may be added.
>>>>>>>>
>>>>>>>> Signed-off-by: Andrew Rybchenko
>>>>>>>> ---
>>>>>>>
>>>>>>> Just a general comment - it is not enough to describe minimum
>>>>>>> memchunk requirements. With the memory hotplug patchset that is
>>>>>>> hopefully getting merged in 18.05, memzones will no longer be
>>>>>>> guaranteed to be IOVA-contiguous. So, if a driver requires its
>>>>>>> mempool to be populated not merely from a single memzone, but
>>>>>>> from a single *physically contiguous* memzone, going by
>>>>>>> callbacks alone will not do, because requiring a single memzone
>>>>>>> says nothing about whether that memzone also has to be
>>>>>>> IOVA-contiguous.
>>>>>>>
>>>>>>> So I believe this needs to stay in one form or another.
>>>>>>>
>>>>>>> (Also, it would be nice to have a flag that a user could pass
>>>>>>> to mempool_create that would force the memzone reservation to
>>>>>>> be IOVA-contiguous, but that's a topic for another
>>>>>>> conversation. The prime user for this would be KNI.)
>>>>>>
>>>>>> I think that min_chunk_size should be treated as
>>>>>> IOVA-contiguous.
>>>>>
>>>>> Why? It's perfectly reasonable to e.g. implement a software
>>>>> mempool driver that would perform some optimizations due to all
>>>>> objects being in the same VA-contiguous memzone, yet not be
>>>>> dependent on the underlying physical memory layout. These are
>>>>> two separate concerns IMO.
>>>>
>>>> It looks like there is some misunderstanding here, or I simply
>>>> don't understand your point.
>>>> Above I mean that a driver should be able to advertise its
>>>> requirements on IOVA-contiguous regions.
>>>> If a driver does not care about the physical memory layout, no
>>>> problem.
>>>
>>> Please correct me if I'm wrong, but my understanding was that you
>>> wanted to use min_chunk_size as a way to express minimum
>>> requirements for IOVA-contiguous memory. If I understood you
>>> correctly, I don't think that's the way to go, because there could
>>> be valid use cases where a mempool driver would like to advertise
>>> min_chunk_size equal to its total size (i.e. allocate everything
>>> in a single memzone), yet not require that memzone to be
>>> IOVA-contiguous. I think these are two different concerns, and one
>>> does not, and should not, imply the other.
>>
>> Aha, you're saying that virtual-contiguous and IOVA-contiguous
>> requirements are different things: there could be use cases where
>> virtual contiguity is important but IOVA contiguity is not
>> required. That is perfectly fine.
>> As I understand it, IOVA-contiguous (physical) typically implies
>> virtual-contiguous as well. Requirements to have everything
>> virtually contiguous but only some blocks physically contiguous
>> are unlikely.
>> So, it may be reduced to either virtual or physical contiguity. If
>> a mempool does not care about physical contiguity at all, the
>> MEMPOOL_F_NO_PHYS_CONTIG flag should be used and min_chunk_size
>> should express virtual contiguity requirements. If a mempool
>> requires physically contiguous objects, MEMPOOL_F_NO_PHYS_CONTIG
>> is *not* set and min_chunk_size expresses physical contiguity
>> requirements.
>>
>
> Fair point. I think we're in agreement now :) This will need to be
> documented then.

OK, I'll do that. I don't mind rebasing my patch series on top of
yours, but I'd like to do it a bit later, when yours is closer to its
final version or even applied - it has really many prerequisites
(pre-series) which should be collected first. These are really major
changes.