From mboxrd@z Thu Jan 1 00:00:00 1970
To: Nithin Dabilpuram
Cc: jerinj@marvell.com, dev@dpdk.org, stable@dpdk.org
References: <20201012081106.10610-1-ndabilpuram@marvell.com>
 <20201105090423.11954-1-ndabilpuram@marvell.com>
 <20201105090423.11954-3-ndabilpuram@marvell.com>
 <2d2b628e-be4c-0abb-6fb0-9bf98d28cc26@intel.com>
From: "Burakov, Anatoly"
Date: Wed, 11 Nov 2020 10:00:21 +0000
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Subject: Re: [dpdk-stable] [dpdk-dev] [PATCH v2 2/3] vfio: fix DMA mapping
 granularity for type1 iova as va
List-Id: patches for DPDK stable branches

On 11-Nov-20 5:08 AM, Nithin Dabilpuram wrote:
> On Tue, Nov 10, 2020 at 02:17:39PM +0000, Burakov, Anatoly wrote:
>> On 05-Nov-20 9:04 AM, Nithin Dabilpuram wrote:
>>> Partial unmapping is not supported for VFIO IOMMU type1 by the
>>> kernel. Though the kernel returns zero, the unmapped size it
>>> reports will not be the same as requested. So check the returned
>>> unmap size and return an error.
>>>
>>> For IOVA as PA, DMA mapping is already done at memseg size
>>> granularity. Do the same for IOVA as VA mode: for DMA map/unmap
>>> triggered by heap allocations, maintain the granularity of the
>>> memseg page size so that heap expansion and contraction do not
>>> hit this issue.
>>>
>>> For user-requested DMA map/unmap, disallow partial unmapping
>>> for VFIO type1.
>>>
>>> Fixes: 73a639085938 ("vfio: allow to map other memory regions")
>>> Cc: anatoly.burakov@intel.com
>>> Cc: stable@dpdk.org
>>>
>>> Signed-off-by: Nithin Dabilpuram
>>> ---
>>
>>
>>
>>> @@ -525,12 +528,19 @@ vfio_mem_event_callback(enum rte_mem_event type, const void *addr, size_t len,
>>>  	/* for IOVA as VA mode, no need to care for IOVA addresses */
>>>  	if (rte_eal_iova_mode() == RTE_IOVA_VA && msl->external == 0) {
>>>  		uint64_t vfio_va = (uint64_t)(uintptr_t)addr;
>>> -		if (type == RTE_MEM_EVENT_ALLOC)
>>> -			vfio_dma_mem_map(default_vfio_cfg, vfio_va, vfio_va,
>>> -					len, 1);
>>> -		else
>>> -			vfio_dma_mem_map(default_vfio_cfg, vfio_va, vfio_va,
>>> -					len, 0);
>>> +		uint64_t page_sz = msl->page_sz;
>>> +
>>> +		/* Maintain granularity of DMA map/unmap to memseg size */
>>> +		for (; cur_len < len; cur_len += page_sz) {
>>> +			if (type == RTE_MEM_EVENT_ALLOC)
>>> +				vfio_dma_mem_map(default_vfio_cfg, vfio_va,
>>> +						 vfio_va, page_sz, 1);
>>> +			else
>>> +				vfio_dma_mem_map(default_vfio_cfg, vfio_va,
>>> +						 vfio_va, page_sz, 0);
>>
>> I think you're mapping the same address here, over and over. Perhaps
>> you meant `vfio_va + cur_len` for the mapping addresses?
>
> There is a 'vfio_va += page_sz;' on the next line, right?
>
>> --
>> Thanks,
>> Anatoly

Oh, right, my apologies. I did need more coffee :D

Acked-by: Anatoly Burakov

-- 
Thanks,
Anatoly