To: Nithin Dabilpuram <ndabilpuram@marvell.com>,
 David Christensen <drc@linux.vnet.ibm.com>, david.marchand@redhat.com
Cc: jerinj@marvell.com, dev@dpdk.org
References: <20201012081106.10610-1-ndabilpuram@marvell.com>
 <20210115073243.7025-1-ndabilpuram@marvell.com>
From: "Burakov, Anatoly" <anatoly.burakov@intel.com>
Message-ID: <9fa8f512-c00d-f8bc-5c5e-36581846bc78@intel.com>
Date: Tue, 16 Feb 2021 13:14:37 +0000
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.0
MIME-Version: 1.0
In-Reply-To: <20210115073243.7025-1-ndabilpuram@marvell.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 7bit
Subject: Re: [dpdk-dev] [PATCH v8 0/3] fix issue with partial DMA unmap

On 15-Jan-21 7:32 AM, Nithin Dabilpuram wrote:
> Partial DMA unmap is not supported by the VFIO type1 IOMMU
> in Linux. Though the return value is zero, the returned
> DMA unmap size is not the same as the expected size.
> So add a test case and a fix for both heap-triggered DMA
> mapping and user-triggered DMA mapping/unmapping.
> 
> Refer to vfio_dma_do_unmap() in drivers/vfio/vfio_iommu_type1.c.
> A snippet of the comment is below.
> 
>          /*
>           * vfio-iommu-type1 (v1) - User mappings were coalesced together to
>           * avoid tracking individual mappings.  This means that the granularity
>           * of the original mapping was lost and the user was allowed to attempt
>           * to unmap any range.  Depending on the contiguousness of physical
>           * memory and page sizes supported by the IOMMU, arbitrary unmaps may
>           * or may not have worked.  We only guaranteed unmap granularity
>           * matching the original mapping; even though it was untracked here,
>           * the original mappings are reflected in IOMMU mappings.  This
>           * resulted in a couple unusual behaviors.  First, if a range is not
>           * able to be unmapped, ex. a set of 4k pages that was mapped as a
>           * 2M hugepage into the IOMMU, the unmap ioctl returns success but with
>           * a zero sized unmap.  Also, if an unmap request overlaps the first
>           * address of a hugepage, the IOMMU will unmap the entire hugepage.
>           * This also returns success and the returned unmap size reflects the
>           * actual size unmapped.
> 
>           * We attempt to maintain compatibility with this "v1" interface, but
>           * we take control out of the hands of the IOMMU.  Therefore, an unmap
>           * request offset from the beginning of the original mapping will
>           * return success with zero sized unmap.  And an unmap request covering
>           * the first iova of mapping will unmap the entire range.
> 
> This behavior can be verified by applying the first patch and adding a return
> check for dma_unmap.size != len in vfio_type1_dma_mem_map().
> 
> v8:
> - Add cc stable to patch 3/3
> 
> v7:
> - Dropped the vfio test case of patch 3/4, i.e.
>    "test: add test case to validate VFIO DMA map/unmap",
>    as it couldn't be supported on POWER9 systems.
> 
> v6:
> - Fixed issue with x86-32 build introduced by v5.
> 
> v5:
> - Changed the vfio test in test_vfio.c to use system pages allocated from the
>    heap instead of mmap() so that they fall within the initially configured
>    window on POWER9 systems.
> - Added acked-by from David for 1/4, 2/4.
> 
> v4:
> - Fixed issue with patch 4/4 on x86 builds.
> 
> v3:
> - Fixed the external memory test case (4/4) to use the system page size
>    instead of 4K.
> - Fixed check-git-log.sh issue and rebased.
> - Added acked-by from anatoly.burakov@intel.com to first 3 patches.
> 
> v2:
> - Reverted the earlier commit that enabled merging contiguous mappings for
>    IOVA as PA. (see 1/3)
> - Updated documentation about kernel DMA mapping limits and the vfio
>    module parameter.
> - Moved vfio test to test_vfio.c and handled comments from
>    Anatoly.
> 
> Nithin Dabilpuram (3):
>    vfio: revert changes for map contiguous areas in one go
>    vfio: fix DMA mapping granularity for type1 IOVA as VA
>    test: change external memory test to use system page sz
> 

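For anyone who wants to reproduce the behaviour described in the cover letter,
here is a minimal sketch of the dma_unmap.size != len check (a hypothetical
standalone helper, not the actual patch; container_fd, iova and len stand in
for an already configured VFIO type1 container and an existing mapping):

#include <errno.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/vfio.h>

/* Unmap [iova, iova + len) and verify that the kernel actually removed
 * the requested range rather than silently unmapping less (or more). */
static int
unmap_and_check(int container_fd, uint64_t iova, uint64_t len)
{
	struct vfio_iommu_type1_dma_unmap dma_unmap;

	memset(&dma_unmap, 0, sizeof(dma_unmap));
	dma_unmap.argsz = sizeof(dma_unmap);
	dma_unmap.iova = iova;
	dma_unmap.size = len;

	if (ioctl(container_fd, VFIO_IOMMU_UNMAP_DMA, &dma_unmap) != 0) {
		fprintf(stderr, "DMA unmap failed: %s\n", strerror(errno));
		return -1;
	}

	/* With the type1 v1 compatibility behaviour quoted above, a partial
	 * unmap "succeeds" with a zero-sized result, and an unmap covering
	 * the first iova removes the whole mapping, so checking the ioctl
	 * return value alone is not enough. */
	if (dma_unmap.size != len) {
		fprintf(stderr, "requested unmap of %llu bytes, kernel unmapped %llu\n",
			(unsigned long long)len,
			(unsigned long long)dma_unmap.size);
		return -1;
	}

	return 0;
}
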
Is there anything preventing this from getting merged? Let's try for 
21.05 :)

-- 
Thanks,
Anatoly