From: "Burakov, Anatoly"
To: Nithin Dabilpuram
Cc: jerinj@marvell.com, dev@dpdk.org, stable@dpdk.org
Date: Tue, 10 Nov 2020 14:17:39 +0000
Message-ID: <2d2b628e-be4c-0abb-6fb0-9bf98d28cc26@intel.com>
In-Reply-To: <20201105090423.11954-3-ndabilpuram@marvell.com>
References: <20201012081106.10610-1-ndabilpuram@marvell.com>
 <20201105090423.11954-1-ndabilpuram@marvell.com>
 <20201105090423.11954-3-ndabilpuram@marvell.com>
Subject: Re: [dpdk-stable] [PATCH v2 2/3] vfio: fix DMA mapping granularity for type1 iova as va

On 05-Nov-20 9:04 AM, Nithin Dabilpuram wrote:
> Partial unmapping is not supported for VFIO IOMMU type1 by the kernel.
> Though the kernel returns zero, the unmapped size returned will not be
> the same as expected. So check the returned unmap size and return an
> error.
>
> For IOVA as PA, DMA mapping is already done at memseg size granularity.
> Do the same for IOVA as VA mode: since DMA map/unmap is triggered by
> heap allocations, maintain the granularity of the memseg page size so
> that heap expansion and contraction do not hit this issue.
>
> For user-requested DMA map/unmap, disallow partial unmapping for VFIO
> type1.
>
> Fixes: 73a639085938 ("vfio: allow to map other memory regions")
> Cc: anatoly.burakov@intel.com
> Cc: stable@dpdk.org
>
> Signed-off-by: Nithin Dabilpuram
> ---
> @@ -525,12 +528,19 @@ vfio_mem_event_callback(enum rte_mem_event type, const void *addr, size_t len,
>  	/* for IOVA as VA mode, no need to care for IOVA addresses */
>  	if (rte_eal_iova_mode() == RTE_IOVA_VA && msl->external == 0) {
>  		uint64_t vfio_va = (uint64_t)(uintptr_t)addr;
> -		if (type == RTE_MEM_EVENT_ALLOC)
> -			vfio_dma_mem_map(default_vfio_cfg, vfio_va, vfio_va,
> -					len, 1);
> -		else
> -			vfio_dma_mem_map(default_vfio_cfg, vfio_va, vfio_va,
> -					len, 0);
> +		uint64_t page_sz = msl->page_sz;
> +
> +		/* Maintain granularity of DMA map/unmap to memseg size */
> +		for (; cur_len < len; cur_len += page_sz) {
> +			if (type == RTE_MEM_EVENT_ALLOC)
> +				vfio_dma_mem_map(default_vfio_cfg, vfio_va,
> +						 vfio_va, page_sz, 1);
> +			else
> +				vfio_dma_mem_map(default_vfio_cfg, vfio_va,
> +						 vfio_va, page_sz, 0);

I think you're mapping the same address here, over and over. Perhaps you
meant `vfio_va + cur_len` for the mapping addresses?

-- 
Thanks,
Anatoly
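
P.S. A minimal standalone sketch of the per-page loop I have in mind, just
to illustrate the point. This is not DPDK code: mock_dma_mem_map() is a
made-up stub that only prints the range a single call would cover, and the
address and page size are arbitrary example values.

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/* Made-up stand-in for vfio_dma_mem_map(): only logs the range that
 * one map/unmap call would cover. */
static void
mock_dma_mem_map(uint64_t vaddr, uint64_t iova, uint64_t len, int do_map)
{
	printf("%s va=0x%" PRIx64 " iova=0x%" PRIx64 " len=0x%" PRIx64 "\n",
	       do_map ? "map" : "unmap", vaddr, iova, len);
}

int
main(void)
{
	uint64_t vfio_va = 0x100000000ULL; /* arbitrary example VA */
	uint64_t page_sz = 2ULL << 20;     /* 2M memseg page size */
	uint64_t len = 3 * page_sz;        /* region spans three pages */
	uint64_t cur_len;

	/* advance the address by cur_len on every iteration so that each
	 * call covers a different page of the region */
	for (cur_len = 0; cur_len < len; cur_len += page_sz)
		mock_dma_mem_map(vfio_va + cur_len, vfio_va + cur_len,
				 page_sz, 1);

	return 0;
}

With `vfio_va + cur_len` the three calls cover 0x100000000, 0x100200000 and
0x100400000; with plain `vfio_va` all three would hit the same first page.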