From: "Burakov, Anatoly"
To: Nithin Dabilpuram
Cc: jerinj@marvell.com, dev@dpdk.org, stable@dpdk.org
Date: Tue, 10 Nov 2020 14:22:45 +0000
Message-ID: <0b78852b-2647-4cfd-21fa-4a17142fdb78@intel.com>
In-Reply-To: <89bd0363-7fbf-a1b0-4708-1511dd9cd81f@intel.com>
References: <20201012081106.10610-1-ndabilpuram@marvell.com>
 <20201105090423.11954-1-ndabilpuram@marvell.com>
 <20201105090423.11954-3-ndabilpuram@marvell.com>
 <89bd0363-7fbf-a1b0-4708-1511dd9cd81f@intel.com>
Subject: Re: [dpdk-stable] [dpdk-dev] [PATCH v2 2/3] vfio: fix DMA mapping
 granularity for type1 iova as va

On 10-Nov-20 2:04 PM, Burakov, Anatoly wrote:
> On 05-Nov-20 9:04 AM, Nithin Dabilpuram wrote:
>> Partial unmapping is not supported for VFIO IOMMU type1 by the
>> kernel. Though the kernel returns zero, the unmapped size it reports
>> will not be the same as requested. So check the returned unmap size
>> and return an error.
>>
>> For IOVA as PA, DMA mapping is already done at memseg size
>> granularity. Do the same for IOVA as VA mode: for DMA map/unmap
>> triggered by heap allocations, maintain memseg page size granularity
>> so that heap expansion and contraction do not hit this issue.
>>
>> For user-requested DMA map/unmap, disallow partial unmapping for
>> VFIO type1.
>>
>> Fixes: 73a639085938 ("vfio: allow to map other memory regions")
>> Cc: anatoly.burakov@intel.com
>> Cc: stable@dpdk.org
>>
>> Signed-off-by: Nithin Dabilpuram
>> ---
>
> Maybe I just didn't have enough coffee today, but I still don't see
> why this "partial unmap" thing exists.

Oh, right, this is for *user*-mapped memory. Disregard this email.

>
> We are already mapping the addresses page-by-page, so surely "partial"
> unmaps can't even exist in the first place?
>

--
Thanks,
Anatoly
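
For context, a minimal sketch of the check the commit message describes,
assuming a VFIO type1 container fd. This is not the actual DPDK patch
code, and the helper name unmap_dma_checked is illustrative. The kernel
writes the size it really unmapped back into the size field of
struct vfio_iommu_type1_dma_unmap, so a caller can detect a silently
partial unmap by comparing it with the requested length:

    #include <stdint.h>
    #include <errno.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <linux/vfio.h>

    /*
     * Illustrative helper (not DPDK code): ask the VFIO type1 IOMMU to
     * unmap [iova, iova + len) and verify that the kernel actually
     * unmapped the whole range. Type1 may return 0 yet report a smaller
     * unmapped size when asked to split an existing mapping, which is
     * the condition the patch guards against.
     */
    static int
    unmap_dma_checked(int container_fd, uint64_t iova, uint64_t len)
    {
            struct vfio_iommu_type1_dma_unmap dma_unmap;

            memset(&dma_unmap, 0, sizeof(dma_unmap));
            dma_unmap.argsz = sizeof(dma_unmap);
            dma_unmap.iova = iova;
            dma_unmap.size = len;

            if (ioctl(container_fd, VFIO_IOMMU_UNMAP_DMA, &dma_unmap) != 0)
                    return -errno;

            /* the kernel writes back the size it actually unmapped */
            if (dma_unmap.size != len)
                    return -ENOTSUP; /* partial unmap: treat as an error */

            return 0;
    }

The point is only that a zero return from the ioctl is not, on its own,
confirmation that the full range was unmapped.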