From mboxrd@z Thu Jan  1 00:00:00 1970
From: "Burakov, Anatoly"
To: Nithin Dabilpuram
Cc: jerinj@marvell.com, dev@dpdk.org, stable@dpdk.org
Date: Tue, 10 Nov 2020 14:04:19 +0000
Message-ID: <89bd0363-7fbf-a1b0-4708-1511dd9cd81f@intel.com>
In-Reply-To: <20201105090423.11954-3-ndabilpuram@marvell.com>
References: <20201012081106.10610-1-ndabilpuram@marvell.com>
 <20201105090423.11954-1-ndabilpuram@marvell.com>
 <20201105090423.11954-3-ndabilpuram@marvell.com>
Subject: Re: [dpdk-stable] [PATCH v2 2/3] vfio: fix DMA mapping granularity for type1 iova as va
List-Id: patches for DPDK stable branches <stable.dpdk.org>

On 05-Nov-20 9:04 AM, Nithin Dabilpuram wrote:
> Partial unmapping is not supported for VFIO IOMMU type1
> by the kernel. Although the kernel returns zero, the unmapped
> size reported will not match the requested size. So check the
> returned unmap size and return an error on mismatch.
>
> For IOVA as PA, DMA mapping is already done at memseg size
> granularity. Do the same for IOVA as VA mode: since DMA
> map/unmap is triggered by heap allocations, maintain the
> granularity of the memseg page size so that heap expansion
> and contraction do not hit this issue.
>
> For user-requested DMA map/unmap, disallow partial unmapping
> for VFIO type1.
>
> Fixes: 73a639085938 ("vfio: allow to map other memory regions")
> Cc: anatoly.burakov@intel.com
> Cc: stable@dpdk.org
>
> Signed-off-by: Nithin Dabilpuram
> ---

Maybe I just didn't have enough coffee today, but I still don't see why
this "partial unmap" thing exists. We are already mapping the addresses
page by page, so surely "partial" unmaps can't even exist in the first
place?

-- 
Thanks,
Anatoly