From mboxrd@z Thu Jan 1 00:00:00 1970
From: "Burakov, Anatoly"
To: Nithin Dabilpuram
Cc: Nithin Dabilpuram, Jerin Jacob, dev@dpdk.org, stable@dpdk.org
Date: Thu, 15 Oct 2020 16:10:31 +0100
Message-ID: <66b61bda-03a8-d4c4-af9f-0f90a6ef956d@intel.com>
References: <20201012081106.10610-1-ndabilpuram@marvell.com> <20201012081106.10610-3-ndabilpuram@marvell.com> <05afb7f5-96bf-dffd-15dd-2024586f7290@intel.com> <20201015060914.GA32207@outlook.office365.com>
Subject: Re: [dpdk-stable] [dpdk-dev] [EXT] Re: [PATCH 2/2] vfio: fix partial DMA unmapping for VFIO type1
List-Id: patches for DPDK stable branches

On 15-Oct-20 12:57 PM, Nithin Dabilpuram wrote:
> On Thu, Oct 15, 2020 at 3:31 PM Burakov, Anatoly wrote:
>>
>> On 15-Oct-20 7:09 AM, Nithin Dabilpuram wrote:
>>> On Wed, Oct 14, 2020 at 04:07:10PM +0100, Burakov, Anatoly wrote:
>>>> External Email
>>>>
>>>> ----------------------------------------------------------------------
>>>> On 12-Oct-20 9:11 AM, Nithin Dabilpuram wrote:
>>>>> Partial unmapping is not supported for VFIO IOMMU type1
>>>>> by the kernel. Though the kernel returns zero, the unmapped size
>>>>> returned will not be the same as expected. So check the
>>>>> returned unmap size and return an error.
>>>>>
>>>>> For the case of DMA map/unmap triggered by heap allocations,
>>>>> maintain the granularity of the memseg page size so that heap
>>>>> expansion and contraction do not have this issue.
>>>>
>>>> This is quite unfortunate, because there was a different bug that had to do
>>>> with the kernel having a very limited number of mappings available [1], as a
>>>> result of which the page concatenation code was added.
>>>>
>>>> It should therefore be documented that the dma_entry_limit parameter should
>>>> be adjusted should the user run out of DMA entries.
>>>>
>>>> [1] https://urldefense.proofpoint.com/v2/url?u=https-3A__lore.kernel.org_lkml_155414977872.12780.13728555131525362206.stgit-40gimli.home_T_&d=DwICaQ&c=nKjWec2b6R0mOyPaz7xtfQ&r=FZ_tPCbgFOh18zwRPO9H0yDx8VW38vuapifdDfc8SFQ&m=3GMg-634_cdUCY4WpQPwjzZ_S4ckuMHOnt2FxyyjXMk&s=TJLzppkaDS95VGyRHX2hzflQfb9XLK0OiOszSXoeXKk&e=
>>
>>
>>
>>>>>                 RTE_LOG(ERR, EAL, "  cannot clear DMA remapping, error %i (%s)\n",
>>>>>                         errno, strerror(errno));
>>>>>                 return -1;
>>>>> +        } else if (dma_unmap.size != len) {
>>>>> +                RTE_LOG(ERR, EAL, "  unexpected size %"PRIu64" of DMA "
>>>>> +                        "remapping cleared instead of %"PRIu64"\n",
>>>>> +                        (uint64_t)dma_unmap.size, len);
>>>>> +                rte_errno = EIO;
>>>>> +                return -1;
>>>>>         }
>>>>> }
>>>>> @@ -1853,6 +1869,12 @@ container_dma_unmap(struct vfio_config *vfio_cfg, uint64_t vaddr, uint64_t iova,
>>>>>         /* we're partially unmapping a previously mapped region, so we
>>>>>          * need to split entry into two.
>>>>>          */
>>>>> +        if (!vfio_cfg->vfio_iommu_type->partial_unmap) {
>>>>> +                RTE_LOG(DEBUG, EAL, "DMA partial unmap unsupported\n");
>>>>> +                rte_errno = ENOTSUP;
>>>>> +                ret = -1;
>>>>> +                goto out;
>>>>> +        }
>>>>
>>>> How would we ever arrive here if we never do more than 1 page worth of
>>>> memory anyway? I don't think this is needed.
>>>
>>> container_dma_unmap() is called by the user via rte_vfio_container_dma_unmap(),
>>> and when he maps we don't split it, as we don't know about his memory.
>>> So if he maps multiple pages and tries to unmap partially, then we should fail.
>>
>> Should we map it at page granularity then, instead of adding this
>> discrepancy between EAL and user mapping? I.e. instead of adding a
>> workaround, how about we just do the same thing for user mem mappings?
>>
> In heap mappings we map and unmap at huge page granularity, as we will
> always maintain that.
>
> But here I think we don't know if the user's allocation is a huge page or a
> collection of system pages. The only thing we can do here is map it at
> system page granularity, which could waste entries if he really is working
> with hugepages. Isn't it?
>

Yeah, we do. The API mandates page granularity, and it will check against
page size and number of IOVA entries, so yes, we do enforce the fact that
the IOVA addresses supplied by the user have to be page addresses.

--
Thanks,
Anatoly