DPDK patches and discussions
* [dpdk-dev] [PATCH 0/2] fix issue with partial DMA unmap
@ 2020-10-12  8:11 Nithin Dabilpuram
  2020-10-12  8:11 ` [dpdk-dev] [PATCH 1/2] test: add test case to validate VFIO DMA map/unmap Nithin Dabilpuram
  2020-10-12  8:11 ` [dpdk-dev] [PATCH 2/2] vfio: fix partial DMA unmapping for VFIO type1 Nithin Dabilpuram
  0 siblings, 2 replies; 16+ messages in thread
From: Nithin Dabilpuram @ 2020-10-12  8:11 UTC (permalink / raw)
  To: anatoly.burakov; +Cc: jerinj, dev, Nithin Dabilpuram

Partial DMA unmap is not supported by the VFIO type1 IOMMU
in Linux. Though the return value is zero, the returned
DMA unmap size is not the same as the requested size.
So add a test case and a fix covering both heap-triggered DMA
mapping and user-triggered DMA mapping/unmapping.

See vfio_dma_do_unmap() in drivers/vfio/vfio_iommu_type1.c;
a snippet of the relevant comment is below.

        /*
         * vfio-iommu-type1 (v1) - User mappings were coalesced together to
         * avoid tracking individual mappings.  This means that the granularity
         * of the original mapping was lost and the user was allowed to attempt
         * to unmap any range.  Depending on the contiguousness of physical
         * memory and page sizes supported by the IOMMU, arbitrary unmaps may
         * or may not have worked.  We only guaranteed unmap granularity
         * matching the original mapping; even though it was untracked here,
         * the original mappings are reflected in IOMMU mappings.  This
         * resulted in a couple unusual behaviors.  First, if a range is not
         * able to be unmapped, ex. a set of 4k pages that was mapped as a
         * 2M hugepage into the IOMMU, the unmap ioctl returns success but with
         * a zero sized unmap.  Also, if an unmap request overlaps the first
         * address of a hugepage, the IOMMU will unmap the entire hugepage.
         * This also returns success and the returned unmap size reflects the
         * actual size unmapped.

         * We attempt to maintain compatibility with this "v1" interface, but
         * we take control out of the hands of the IOMMU.  Therefore, an unmap
         * request offset from the beginning of the original mapping will
         * return success with zero sized unmap.  And an unmap request covering
         * the first iova of mapping will unmap the entire range.
         */

This behavior can be verified by applying the first patch and adding a
return check for dma_unmap.size != len in vfio_type1_dma_mem_map().

Nithin Dabilpuram (2):
  test: add test case to validate VFIO DMA map/unmap
  vfio: fix partial DMA unmapping for VFIO type1

 app/test/test_memory.c          | 79 +++++++++++++++++++++++++++++++++++++++++
 lib/librte_eal/linux/eal_vfio.c | 34 ++++++++++++++----
 lib/librte_eal/linux/eal_vfio.h |  1 +
 3 files changed, 108 insertions(+), 6 deletions(-)

-- 
2.8.4


^ permalink raw reply	[flat|nested] 16+ messages in thread

* [dpdk-dev] [PATCH 1/2] test: add test case to validate VFIO DMA map/unmap
  2020-10-12  8:11 [dpdk-dev] [PATCH 0/2] fix issue with partial DMA unmap Nithin Dabilpuram
@ 2020-10-12  8:11 ` Nithin Dabilpuram
  2020-10-14 14:39   ` Burakov, Anatoly
  2020-10-12  8:11 ` [dpdk-dev] [PATCH 2/2] vfio: fix partial DMA unmapping for VFIO type1 Nithin Dabilpuram
  1 sibling, 1 reply; 16+ messages in thread
From: Nithin Dabilpuram @ 2020-10-12  8:11 UTC (permalink / raw)
  To: anatoly.burakov; +Cc: jerinj, dev, Nithin Dabilpuram

Add test case in test_memory to test VFIO DMA map/unmap.

Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
---
 app/test/test_memory.c | 79 ++++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 79 insertions(+)

diff --git a/app/test/test_memory.c b/app/test/test_memory.c
index 7d5ae99..1c56455 100644
--- a/app/test/test_memory.c
+++ b/app/test/test_memory.c
@@ -4,11 +4,16 @@
 
 #include <stdio.h>
 #include <stdint.h>
+#include <string.h>
+#include <sys/mman.h>
+#include <unistd.h>
 
 #include <rte_eal.h>
+#include <rte_errno.h>
 #include <rte_memory.h>
 #include <rte_common.h>
 #include <rte_memzone.h>
+#include <rte_vfio.h>
 
 #include "test.h"
 
@@ -70,6 +75,71 @@ check_seg_fds(const struct rte_memseg_list *msl, const struct rte_memseg *ms,
 }
 
 static int
+test_memory_vfio_dma_map(void)
+{
+	uint64_t sz = 2 * sysconf(_SC_PAGESIZE), sz1, sz2;
+	uint64_t unmap1, unmap2;
+	uint8_t *mem;
+	int ret;
+
+	/* Check if vfio is enabled in both kernel and eal */
+	ret = rte_vfio_is_enabled("vfio");
+	if (!ret)
+		return 1;
+
+	/* Allocate twice size of page */
+	mem = mmap(NULL, sz, PROT_READ | PROT_WRITE,
+		   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
+	if (mem == MAP_FAILED) {
+		printf("Failed to allocate memory for external heap\n");
+		return -1;
+	}
+
+	/* Force page allocation */
+	memset(mem, 0, sz);
+
+	/* map the whole region */
+	ret = rte_vfio_container_dma_map(RTE_VFIO_DEFAULT_CONTAINER_FD,
+					 (uint64_t)mem, (rte_iova_t)mem, sz);
+	if (ret) {
+		printf("Failed to dma map whole region, ret=%d\n", ret);
+		goto fail;
+	}
+
+	unmap1 = (uint64_t)mem + (sz / 2);
+	sz1 = sz / 2;
+	unmap2 = (uint64_t)mem;
+	sz2 = sz / 2;
+	/* unmap the partial region */
+	ret = rte_vfio_container_dma_unmap(RTE_VFIO_DEFAULT_CONTAINER_FD,
+					   unmap1, (rte_iova_t)unmap1, sz1);
+	if (ret) {
+		if (rte_errno == ENOTSUP) {
+			printf("Partial dma unmap not supported\n");
+			unmap2 = (uint64_t)mem;
+			sz2 = sz;
+		} else {
+			printf("Failed to unmap send half region, ret=%d(%d)\n",
+			       ret, rte_errno);
+			goto fail;
+		}
+	}
+
+	/* unmap the remaining region */
+	ret = rte_vfio_container_dma_unmap(RTE_VFIO_DEFAULT_CONTAINER_FD,
+					   unmap2, (rte_iova_t)unmap2, sz2);
+	if (ret) {
+		printf("Failed to unmap remaining region, ret=%d(%d)\n", ret,
+		       rte_errno);
+		goto fail;
+	}
+
+fail:
+	munmap(mem, sz);
+	return ret;
+}
+
+static int
 test_memory(void)
 {
 	uint64_t s;
@@ -101,6 +171,15 @@ test_memory(void)
 		return -1;
 	}
 
+	/* test for vfio dma map/unmap */
+	ret = test_memory_vfio_dma_map();
+	if (ret == 1) {
+		printf("VFIO dma map/unmap unsupported\n");
+	} else if (ret < 0) {
+		printf("Error vfio dma map/unmap, ret=%d\n", ret);
+		return -1;
+	}
+
 	return 0;
 }
 
-- 
2.8.4


^ permalink raw reply	[flat|nested] 16+ messages in thread

* [dpdk-dev] [PATCH 2/2] vfio: fix partial DMA unmapping for VFIO type1
  2020-10-12  8:11 [dpdk-dev] [PATCH 0/2] fix issue with partial DMA unmap Nithin Dabilpuram
  2020-10-12  8:11 ` [dpdk-dev] [PATCH 1/2] test: add test case to validate VFIO DMA map/unmap Nithin Dabilpuram
@ 2020-10-12  8:11 ` Nithin Dabilpuram
  2020-10-14 15:07   ` Burakov, Anatoly
  1 sibling, 1 reply; 16+ messages in thread
From: Nithin Dabilpuram @ 2020-10-12  8:11 UTC (permalink / raw)
  To: anatoly.burakov; +Cc: jerinj, dev, Nithin Dabilpuram, stable

Partial unmapping is not supported for VFIO IOMMU type1
by the kernel. Though the kernel returns zero, the unmapped
size returned will not be the same as requested. So check the
returned unmap size and return an error.

For the case of DMA map/unmap triggered by heap allocations,
maintain the granularity of the memseg page size so that heap
expansion and contraction do not run into this issue.

For user-requested DMA map/unmap, disallow partial unmapping
for VFIO type1.

Fixes: 73a639085938 ("vfio: allow to map other memory regions")
Cc: anatoly.burakov@intel.com
Cc: stable@dpdk.org

Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
---
 lib/librte_eal/linux/eal_vfio.c | 34 ++++++++++++++++++++++++++++------
 lib/librte_eal/linux/eal_vfio.h |  1 +
 2 files changed, 29 insertions(+), 6 deletions(-)

diff --git a/lib/librte_eal/linux/eal_vfio.c b/lib/librte_eal/linux/eal_vfio.c
index d26e164..ef95259 100644
--- a/lib/librte_eal/linux/eal_vfio.c
+++ b/lib/librte_eal/linux/eal_vfio.c
@@ -69,6 +69,7 @@ static const struct vfio_iommu_type iommu_types[] = {
 	{
 		.type_id = RTE_VFIO_TYPE1,
 		.name = "Type 1",
+		.partial_unmap = false,
 		.dma_map_func = &vfio_type1_dma_map,
 		.dma_user_map_func = &vfio_type1_dma_mem_map
 	},
@@ -76,6 +77,7 @@ static const struct vfio_iommu_type iommu_types[] = {
 	{
 		.type_id = RTE_VFIO_SPAPR,
 		.name = "sPAPR",
+		.partial_unmap = true,
 		.dma_map_func = &vfio_spapr_dma_map,
 		.dma_user_map_func = &vfio_spapr_dma_mem_map
 	},
@@ -83,6 +85,7 @@ static const struct vfio_iommu_type iommu_types[] = {
 	{
 		.type_id = RTE_VFIO_NOIOMMU,
 		.name = "No-IOMMU",
+		.partial_unmap = true,
 		.dma_map_func = &vfio_noiommu_dma_map,
 		.dma_user_map_func = &vfio_noiommu_dma_mem_map
 	},
@@ -525,12 +528,19 @@ vfio_mem_event_callback(enum rte_mem_event type, const void *addr, size_t len,
 	/* for IOVA as VA mode, no need to care for IOVA addresses */
 	if (rte_eal_iova_mode() == RTE_IOVA_VA && msl->external == 0) {
 		uint64_t vfio_va = (uint64_t)(uintptr_t)addr;
-		if (type == RTE_MEM_EVENT_ALLOC)
-			vfio_dma_mem_map(default_vfio_cfg, vfio_va, vfio_va,
-					len, 1);
-		else
-			vfio_dma_mem_map(default_vfio_cfg, vfio_va, vfio_va,
-					len, 0);
+		uint64_t page_sz = msl->page_sz;
+
+		/* Maintain granularity of DMA map/unmap to memseg size */
+		for (; cur_len < len; cur_len += page_sz) {
+			if (type == RTE_MEM_EVENT_ALLOC)
+				vfio_dma_mem_map(default_vfio_cfg, vfio_va,
+						 vfio_va, page_sz, 1);
+			else
+				vfio_dma_mem_map(default_vfio_cfg, vfio_va,
+						 vfio_va, page_sz, 0);
+			vfio_va += page_sz;
+		}
+
 		return;
 	}
 
@@ -1383,6 +1393,12 @@ vfio_type1_dma_mem_map(int vfio_container_fd, uint64_t vaddr, uint64_t iova,
 			RTE_LOG(ERR, EAL, "  cannot clear DMA remapping, error %i (%s)\n",
 					errno, strerror(errno));
 			return -1;
+		} else if (dma_unmap.size != len) {
+			RTE_LOG(ERR, EAL, "  unexpected size %"PRIu64" of DMA "
+				"remapping cleared instead of %"PRIu64"\n",
+				(uint64_t)dma_unmap.size, len);
+			rte_errno = EIO;
+			return -1;
 		}
 	}
 
@@ -1853,6 +1869,12 @@ container_dma_unmap(struct vfio_config *vfio_cfg, uint64_t vaddr, uint64_t iova,
 		/* we're partially unmapping a previously mapped region, so we
 		 * need to split entry into two.
 		 */
+		if (!vfio_cfg->vfio_iommu_type->partial_unmap) {
+			RTE_LOG(DEBUG, EAL, "DMA partial unmap unsupported\n");
+			rte_errno = ENOTSUP;
+			ret = -1;
+			goto out;
+		}
 		if (user_mem_maps->n_maps == VFIO_MAX_USER_MEM_MAPS) {
 			RTE_LOG(ERR, EAL, "Not enough space to store partial mapping\n");
 			rte_errno = ENOMEM;
diff --git a/lib/librte_eal/linux/eal_vfio.h b/lib/librte_eal/linux/eal_vfio.h
index cb2d35f..6ebaca6 100644
--- a/lib/librte_eal/linux/eal_vfio.h
+++ b/lib/librte_eal/linux/eal_vfio.h
@@ -113,6 +113,7 @@ typedef int (*vfio_dma_user_func_t)(int fd, uint64_t vaddr, uint64_t iova,
 struct vfio_iommu_type {
 	int type_id;
 	const char *name;
+	bool partial_unmap;
 	vfio_dma_user_func_t dma_user_map_func;
 	vfio_dma_func_t dma_map_func;
 };
-- 
2.8.4


^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [dpdk-dev] [PATCH 1/2] test: add test case to validate VFIO DMA map/unmap
  2020-10-12  8:11 ` [dpdk-dev] [PATCH 1/2] test: add test case to validate VFIO DMA map/unmap Nithin Dabilpuram
@ 2020-10-14 14:39   ` Burakov, Anatoly
  2020-10-15  9:54     ` [dpdk-dev] [EXT] " Nithin Dabilpuram
  0 siblings, 1 reply; 16+ messages in thread
From: Burakov, Anatoly @ 2020-10-14 14:39 UTC (permalink / raw)
  To: Nithin Dabilpuram; +Cc: jerinj, dev

On 12-Oct-20 9:11 AM, Nithin Dabilpuram wrote:
> Add test case in test_memory to test VFIO DMA map/unmap.
> 
> Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
> ---
>   app/test/test_memory.c | 79 ++++++++++++++++++++++++++++++++++++++++++++++++++
>   1 file changed, 79 insertions(+)
> 
> diff --git a/app/test/test_memory.c b/app/test/test_memory.c
> index 7d5ae99..1c56455 100644
> --- a/app/test/test_memory.c
> +++ b/app/test/test_memory.c
> @@ -4,11 +4,16 @@
>   
>   #include <stdio.h>
>   #include <stdint.h>
> +#include <string.h>
> +#include <sys/mman.h>
> +#include <unistd.h>
>   
>   #include <rte_eal.h>
> +#include <rte_errno.h>
>   #include <rte_memory.h>
>   #include <rte_common.h>
>   #include <rte_memzone.h>
> +#include <rte_vfio.h>
>   
>   #include "test.h"
>   
> @@ -70,6 +75,71 @@ check_seg_fds(const struct rte_memseg_list *msl, const struct rte_memseg *ms,
>   }
>   
>   static int
> +test_memory_vfio_dma_map(void)
> +{
> +	uint64_t sz = 2 * sysconf(_SC_PAGESIZE), sz1, sz2;

i think we now have a function for that, rte_page_size() ?

Also, i would prefer

uint64_t sz1, sz2, sz = 2 * rte_page_size();

Easier to parse IMO.

> +	uint64_t unmap1, unmap2;
> +	uint8_t *mem;
> +	int ret;
> +
> +	/* Check if vfio is enabled in both kernel and eal */
> +	ret = rte_vfio_is_enabled("vfio");
> +	if (!ret)
> +		return 1;

No need, rte_vfio_container_dma_map() should set errno to ENODEV if vfio 
is not enabled.

> +
> +	/* Allocate twice size of page */
> +	mem = mmap(NULL, sz, PROT_READ | PROT_WRITE,
> +		   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
> +	if (mem == MAP_FAILED) {
> +		printf("Failed to allocate memory for external heap\n");
> +		return -1;
> +	}
> +
> +	/* Force page allocation */
> +	memset(mem, 0, sz);
> +
> +	/* map the whole region */
> +	ret = rte_vfio_container_dma_map(RTE_VFIO_DEFAULT_CONTAINER_FD,
> +					 (uint64_t)mem, (rte_iova_t)mem, sz);

should be (uintptr_t) perhaps?

Also, this can return -1 with rte_errno == ENOTSUP, i think this happens 
if there are no devices attached (or if there's no VFIO support, like it 
would be on FreeBSD or Windows).

> +	if (ret) {
> +		printf("Failed to dma map whole region, ret=%d\n", ret);
> +		goto fail;
> +	}
> +
> +	unmap1 = (uint64_t)mem + (sz / 2);
> +	sz1 = sz / 2;
> +	unmap2 = (uint64_t)mem;
> +	sz2 = sz / 2;
> +	/* unmap the partial region */
> +	ret = rte_vfio_container_dma_unmap(RTE_VFIO_DEFAULT_CONTAINER_FD,
> +					   unmap1, (rte_iova_t)unmap1, sz1);
> +	if (ret) {
> +		if (rte_errno == ENOTSUP) {
> +			printf("Partial dma unmap not supported\n");
> +			unmap2 = (uint64_t)mem;
> +			sz2 = sz;
> +		} else {
> +			printf("Failed to unmap send half region, ret=%d(%d)\n",

I think "send half" is a typo? Also, here and in other places, i would 
prefer a rte_strerror() instead of raw rte_errno number.

> +			       ret, rte_errno);
> +			goto fail;
> +		}
> +	}
> +
> +	/* unmap the remaining region */
> +	ret = rte_vfio_container_dma_unmap(RTE_VFIO_DEFAULT_CONTAINER_FD,
> +					   unmap2, (rte_iova_t)unmap2, sz2);
> +	if (ret) {
> +		printf("Failed to unmap remaining region, ret=%d(%d)\n", ret,
> +		       rte_errno);
> +		goto fail;
> +	}
> +
> +fail:
> +	munmap(mem, sz);
> +	return ret;
> +}
> +
> +static int
>   test_memory(void)
>   {
>   	uint64_t s;
> @@ -101,6 +171,15 @@ test_memory(void)
>   		return -1;
>   	}
>   
> +	/* test for vfio dma map/unmap */
> +	ret = test_memory_vfio_dma_map();
> +	if (ret == 1) {
> +		printf("VFIO dma map/unmap unsupported\n");
> +	} else if (ret < 0) {
> +		printf("Error vfio dma map/unmap, ret=%d\n", ret);
> +		return -1;
> +	}
> +

This looks odd in this autotest. Perhaps create a new autotest for VFIO?

>   	return 0;
>   }
>   
> 


-- 
Thanks,
Anatoly

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [dpdk-dev] [PATCH 2/2] vfio: fix partial DMA unmapping for VFIO type1
  2020-10-12  8:11 ` [dpdk-dev] [PATCH 2/2] vfio: fix partial DMA unmapping for VFIO type1 Nithin Dabilpuram
@ 2020-10-14 15:07   ` Burakov, Anatoly
  2020-10-15  6:09     ` [dpdk-dev] [EXT] " Nithin Dabilpuram
  0 siblings, 1 reply; 16+ messages in thread
From: Burakov, Anatoly @ 2020-10-14 15:07 UTC (permalink / raw)
  To: Nithin Dabilpuram; +Cc: jerinj, dev, stable

On 12-Oct-20 9:11 AM, Nithin Dabilpuram wrote:
> Partial unmapping is not supported for VFIO IOMMU type1
> by kernel. Though kernel gives return as zero, the unmapped size
> returned will not be same as expected. So check for
> returned unmap size and return error.
> 
> For case of DMA map/unmap triggered by heap allocations,
> maintain granularity of memseg page size so that heap
> expansion and contraction does not have this issue.

This is quite unfortunate, because there was a different bug that had to 
do with the kernel having a very limited number of mappings available [1], 
as a result of which the page concatenation code was added.

It should therefore be documented that the dma_entry_limit parameter 
should be adjusted should the user run out of DMA entries.

[1] 
https://lore.kernel.org/lkml/155414977872.12780.13728555131525362206.stgit@gimli.home/T/
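
As a concrete sketch of that suggestion (illustrative only; the parameter exists on kernels with the limit from [1], and the exact default may vary by kernel version), the limit is a module parameter of vfio_iommu_type1:

```shell
# Reload the type1 module with a larger per-container DMA entry limit
# (dma_entry_limit; default 65535 entries).
modprobe -r vfio_iommu_type1
modprobe vfio_iommu_type1 dma_entry_limit=131072

# Or make the setting persistent across reboots:
echo "options vfio_iommu_type1 dma_entry_limit=131072" | \
    sudo tee /etc/modprobe.d/vfio-iommu-type1.conf
```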

> 
> For user requested DMA map/unmap disallow partial unmapping
> for VFIO type1.
> 
> Fixes: 73a639085938 ("vfio: allow to map other memory regions")
> Cc: anatoly.burakov@intel.com
> Cc: stable@dpdk.org
> 
> Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
> ---
>   lib/librte_eal/linux/eal_vfio.c | 34 ++++++++++++++++++++++++++++------
>   lib/librte_eal/linux/eal_vfio.h |  1 +
>   2 files changed, 29 insertions(+), 6 deletions(-)
> 
> diff --git a/lib/librte_eal/linux/eal_vfio.c b/lib/librte_eal/linux/eal_vfio.c
> index d26e164..ef95259 100644
> --- a/lib/librte_eal/linux/eal_vfio.c
> +++ b/lib/librte_eal/linux/eal_vfio.c
> @@ -69,6 +69,7 @@ static const struct vfio_iommu_type iommu_types[] = {
>   	{
>   		.type_id = RTE_VFIO_TYPE1,
>   		.name = "Type 1",
> +		.partial_unmap = false,
>   		.dma_map_func = &vfio_type1_dma_map,
>   		.dma_user_map_func = &vfio_type1_dma_mem_map
>   	},
> @@ -76,6 +77,7 @@ static const struct vfio_iommu_type iommu_types[] = {
>   	{
>   		.type_id = RTE_VFIO_SPAPR,
>   		.name = "sPAPR",
> +		.partial_unmap = true,
>   		.dma_map_func = &vfio_spapr_dma_map,
>   		.dma_user_map_func = &vfio_spapr_dma_mem_map
>   	},
> @@ -83,6 +85,7 @@ static const struct vfio_iommu_type iommu_types[] = {
>   	{
>   		.type_id = RTE_VFIO_NOIOMMU,
>   		.name = "No-IOMMU",
> +		.partial_unmap = true,
>   		.dma_map_func = &vfio_noiommu_dma_map,
>   		.dma_user_map_func = &vfio_noiommu_dma_mem_map
>   	},
> @@ -525,12 +528,19 @@ vfio_mem_event_callback(enum rte_mem_event type, const void *addr, size_t len,
>   	/* for IOVA as VA mode, no need to care for IOVA addresses */
>   	if (rte_eal_iova_mode() == RTE_IOVA_VA && msl->external == 0) {
>   		uint64_t vfio_va = (uint64_t)(uintptr_t)addr;
> -		if (type == RTE_MEM_EVENT_ALLOC)
> -			vfio_dma_mem_map(default_vfio_cfg, vfio_va, vfio_va,
> -					len, 1);
> -		else
> -			vfio_dma_mem_map(default_vfio_cfg, vfio_va, vfio_va,
> -					len, 0);
> +		uint64_t page_sz = msl->page_sz;
> +
> +		/* Maintain granularity of DMA map/unmap to memseg size */
> +		for (; cur_len < len; cur_len += page_sz) {
> +			if (type == RTE_MEM_EVENT_ALLOC)
> +				vfio_dma_mem_map(default_vfio_cfg, vfio_va,
> +						 vfio_va, page_sz, 1);
> +			else
> +				vfio_dma_mem_map(default_vfio_cfg, vfio_va,
> +						 vfio_va, page_sz, 0);
> +			vfio_va += page_sz;
> +		}
> +

You'd also have to revert d1c7c0cdf7bac5eb40d3a2a690453aefeee5887b 
because currently the PA path will opportunistically concatenate 
contiguous segments into a single mapping too.

>   		return;
>   	}
>   
> @@ -1383,6 +1393,12 @@ vfio_type1_dma_mem_map(int vfio_container_fd, uint64_t vaddr, uint64_t iova,
>   			RTE_LOG(ERR, EAL, "  cannot clear DMA remapping, error %i (%s)\n",
>   					errno, strerror(errno));
>   			return -1;
> +		} else if (dma_unmap.size != len) {
> +			RTE_LOG(ERR, EAL, "  unexpected size %"PRIu64" of DMA "
> +				"remapping cleared instead of %"PRIu64"\n",
> +				(uint64_t)dma_unmap.size, len);
> +			rte_errno = EIO;
> +			return -1;
>   		}
>   	}
>   
> @@ -1853,6 +1869,12 @@ container_dma_unmap(struct vfio_config *vfio_cfg, uint64_t vaddr, uint64_t iova,
>   		/* we're partially unmapping a previously mapped region, so we
>   		 * need to split entry into two.
>   		 */
> +		if (!vfio_cfg->vfio_iommu_type->partial_unmap) {
> +			RTE_LOG(DEBUG, EAL, "DMA partial unmap unsupported\n");
> +			rte_errno = ENOTSUP;
> +			ret = -1;
> +			goto out;
> +		}

How would we ever arrive here if we never do more than 1 page worth of 
memory anyway? I don't think this is needed.

>   		if (user_mem_maps->n_maps == VFIO_MAX_USER_MEM_MAPS) {
>   			RTE_LOG(ERR, EAL, "Not enough space to store partial mapping\n");
>   			rte_errno = ENOMEM;
> diff --git a/lib/librte_eal/linux/eal_vfio.h b/lib/librte_eal/linux/eal_vfio.h
> index cb2d35f..6ebaca6 100644
> --- a/lib/librte_eal/linux/eal_vfio.h
> +++ b/lib/librte_eal/linux/eal_vfio.h
> @@ -113,6 +113,7 @@ typedef int (*vfio_dma_user_func_t)(int fd, uint64_t vaddr, uint64_t iova,
>   struct vfio_iommu_type {
>   	int type_id;
>   	const char *name;
> +	bool partial_unmap;
>   	vfio_dma_user_func_t dma_user_map_func;
>   	vfio_dma_func_t dma_map_func;
>   };
> 


-- 
Thanks,
Anatoly

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [dpdk-dev] [EXT] Re: [PATCH 2/2] vfio: fix partial DMA unmapping for VFIO type1
  2020-10-14 15:07   ` Burakov, Anatoly
@ 2020-10-15  6:09     ` " Nithin Dabilpuram
  2020-10-15 10:00       ` Burakov, Anatoly
  0 siblings, 1 reply; 16+ messages in thread
From: Nithin Dabilpuram @ 2020-10-15  6:09 UTC (permalink / raw)
  To: Burakov, Anatoly; +Cc: jerinj, dev, stable

On Wed, Oct 14, 2020 at 04:07:10PM +0100, Burakov, Anatoly wrote:
> External Email
> 
> ----------------------------------------------------------------------
> On 12-Oct-20 9:11 AM, Nithin Dabilpuram wrote:
> > Partial unmapping is not supported for VFIO IOMMU type1
> > by kernel. Though kernel gives return as zero, the unmapped size
> > returned will not be same as expected. So check for
> > returned unmap size and return error.
> > 
> > For case of DMA map/unmap triggered by heap allocations,
> > maintain granularity of memseg page size so that heap
> > expansion and contraction does not have this issue.
> 
> This is quite unfortunate, because there was a different bug that had to do
> with kernel having a very limited number of mappings available [1], as a
> result of which the page concatenation code was added.
> 
> It should therefore be documented that the dma_entry_limit parameter should
> be adjusted should the user run out of the DMA entries.
> 
> [1] https://lore.kernel.org/lkml/155414977872.12780.13728555131525362206.stgit@gimli.home/T/

Ack, I'll document it in guides/linux_gsg/linux_drivers.rst in vfio section.
> 
> > 
> > For user requested DMA map/unmap disallow partial unmapping
> > for VFIO type1.
> > 
> > Fixes: 73a639085938 ("vfio: allow to map other memory regions")
> > Cc: anatoly.burakov@intel.com
> > Cc: stable@dpdk.org
> > 
> > Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
> > ---
> >   lib/librte_eal/linux/eal_vfio.c | 34 ++++++++++++++++++++++++++++------
> >   lib/librte_eal/linux/eal_vfio.h |  1 +
> >   2 files changed, 29 insertions(+), 6 deletions(-)
> > 
> > diff --git a/lib/librte_eal/linux/eal_vfio.c b/lib/librte_eal/linux/eal_vfio.c
> > index d26e164..ef95259 100644
> > --- a/lib/librte_eal/linux/eal_vfio.c
> > +++ b/lib/librte_eal/linux/eal_vfio.c
> > @@ -69,6 +69,7 @@ static const struct vfio_iommu_type iommu_types[] = {
> >   	{
> >   		.type_id = RTE_VFIO_TYPE1,
> >   		.name = "Type 1",
> > +		.partial_unmap = false,
> >   		.dma_map_func = &vfio_type1_dma_map,
> >   		.dma_user_map_func = &vfio_type1_dma_mem_map
> >   	},
> > @@ -76,6 +77,7 @@ static const struct vfio_iommu_type iommu_types[] = {
> >   	{
> >   		.type_id = RTE_VFIO_SPAPR,
> >   		.name = "sPAPR",
> > +		.partial_unmap = true,
> >   		.dma_map_func = &vfio_spapr_dma_map,
> >   		.dma_user_map_func = &vfio_spapr_dma_mem_map
> >   	},
> > @@ -83,6 +85,7 @@ static const struct vfio_iommu_type iommu_types[] = {
> >   	{
> >   		.type_id = RTE_VFIO_NOIOMMU,
> >   		.name = "No-IOMMU",
> > +		.partial_unmap = true,
> >   		.dma_map_func = &vfio_noiommu_dma_map,
> >   		.dma_user_map_func = &vfio_noiommu_dma_mem_map
> >   	},
> > @@ -525,12 +528,19 @@ vfio_mem_event_callback(enum rte_mem_event type, const void *addr, size_t len,
> >   	/* for IOVA as VA mode, no need to care for IOVA addresses */
> >   	if (rte_eal_iova_mode() == RTE_IOVA_VA && msl->external == 0) {
> >   		uint64_t vfio_va = (uint64_t)(uintptr_t)addr;
> > -		if (type == RTE_MEM_EVENT_ALLOC)
> > -			vfio_dma_mem_map(default_vfio_cfg, vfio_va, vfio_va,
> > -					len, 1);
> > -		else
> > -			vfio_dma_mem_map(default_vfio_cfg, vfio_va, vfio_va,
> > -					len, 0);
> > +		uint64_t page_sz = msl->page_sz;
> > +
> > +		/* Maintain granularity of DMA map/unmap to memseg size */
> > +		for (; cur_len < len; cur_len += page_sz) {
> > +			if (type == RTE_MEM_EVENT_ALLOC)
> > +				vfio_dma_mem_map(default_vfio_cfg, vfio_va,
> > +						 vfio_va, page_sz, 1);
> > +			else
> > +				vfio_dma_mem_map(default_vfio_cfg, vfio_va,
> > +						 vfio_va, page_sz, 0);
> > +			vfio_va += page_sz;
> > +		}
> > +
> 
> You'd also have to revert d1c7c0cdf7bac5eb40d3a2a690453aefeee5887b because
> currently the PA path will opportunistically concatenate contiguous
> segments into a single mapping too.

Ack, I'll change it even for IOVA as PA mode. I missed that.
> 
> >   		return;
> >   	}
> > @@ -1383,6 +1393,12 @@ vfio_type1_dma_mem_map(int vfio_container_fd, uint64_t vaddr, uint64_t iova,
> >   			RTE_LOG(ERR, EAL, "  cannot clear DMA remapping, error %i (%s)\n",
> >   					errno, strerror(errno));
> >   			return -1;
> > +		} else if (dma_unmap.size != len) {
> > +			RTE_LOG(ERR, EAL, "  unexpected size %"PRIu64" of DMA "
> > +				"remapping cleared instead of %"PRIu64"\n",
> > +				(uint64_t)dma_unmap.size, len);
> > +			rte_errno = EIO;
> > +			return -1;
> >   		}
> >   	}
> > @@ -1853,6 +1869,12 @@ container_dma_unmap(struct vfio_config *vfio_cfg, uint64_t vaddr, uint64_t iova,
> >   		/* we're partially unmapping a previously mapped region, so we
> >   		 * need to split entry into two.
> >   		 */
> > +		if (!vfio_cfg->vfio_iommu_type->partial_unmap) {
> > +			RTE_LOG(DEBUG, EAL, "DMA partial unmap unsupported\n");
> > +			rte_errno = ENOTSUP;
> > +			ret = -1;
> > +			goto out;
> > +		}
> 
> How would we ever arrive here if we never do more than 1 page worth of
> memory anyway? I don't think this is needed.

container_dma_unmap() is called by the user via rte_vfio_container_dma_unmap(),
and when the user maps a region we don't split it, since we don't know
anything about that memory. So if the user maps multiple pages and tries
to unmap them partially, we should fail.

> 
> >   		if (user_mem_maps->n_maps == VFIO_MAX_USER_MEM_MAPS) {
> >   			RTE_LOG(ERR, EAL, "Not enough space to store partial mapping\n");
> >   			rte_errno = ENOMEM;
> > diff --git a/lib/librte_eal/linux/eal_vfio.h b/lib/librte_eal/linux/eal_vfio.h
> > index cb2d35f..6ebaca6 100644
> > --- a/lib/librte_eal/linux/eal_vfio.h
> > +++ b/lib/librte_eal/linux/eal_vfio.h
> > @@ -113,6 +113,7 @@ typedef int (*vfio_dma_user_func_t)(int fd, uint64_t vaddr, uint64_t iova,
> >   struct vfio_iommu_type {
> >   	int type_id;
> >   	const char *name;
> > +	bool partial_unmap;
> >   	vfio_dma_user_func_t dma_user_map_func;
> >   	vfio_dma_func_t dma_map_func;
> >   };
> > 
> 
> 
> -- 
> Thanks,
> Anatoly

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [dpdk-dev] [EXT] Re: [PATCH 1/2] test: add test case to validate VFIO DMA map/unmap
  2020-10-14 14:39   ` Burakov, Anatoly
@ 2020-10-15  9:54     ` " Nithin Dabilpuram
  0 siblings, 0 replies; 16+ messages in thread
From: Nithin Dabilpuram @ 2020-10-15  9:54 UTC (permalink / raw)
  To: Burakov, Anatoly; +Cc: jerinj, dev

On Wed, Oct 14, 2020 at 03:39:36PM +0100, Burakov, Anatoly wrote:
> External Email
> 
> ----------------------------------------------------------------------
> On 12-Oct-20 9:11 AM, Nithin Dabilpuram wrote:
> > Add test case in test_memory to test VFIO DMA map/unmap.
> > 
> > Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
> > ---
> >   app/test/test_memory.c | 79 ++++++++++++++++++++++++++++++++++++++++++++++++++
> >   1 file changed, 79 insertions(+)
> > 
> > diff --git a/app/test/test_memory.c b/app/test/test_memory.c
> > index 7d5ae99..1c56455 100644
> > --- a/app/test/test_memory.c
> > +++ b/app/test/test_memory.c
> > @@ -4,11 +4,16 @@
> >   #include <stdio.h>
> >   #include <stdint.h>
> > +#include <string.h>
> > +#include <sys/mman.h>
> > +#include <unistd.h>
> >   #include <rte_eal.h>
> > +#include <rte_errno.h>
> >   #include <rte_memory.h>
> >   #include <rte_common.h>
> >   #include <rte_memzone.h>
> > +#include <rte_vfio.h>
> >   #include "test.h"
> > @@ -70,6 +75,71 @@ check_seg_fds(const struct rte_memseg_list *msl, const struct rte_memseg *ms,
> >   }
> >   static int
> > +test_memory_vfio_dma_map(void)
> > +{
> > +	uint64_t sz = 2 * sysconf(_SC_PAGESIZE), sz1, sz2;
> 
> i think we now have a function for that, rte_page_size() ?
> 
> Also, i would prefer
> 
> uint64_t sz1, sz2, sz = 2 * rte_page_size();
> 
> Easier to parse IMO.

Ack, will use rte_mem_page_size().
> 
> > +	uint64_t unmap1, unmap2;
> > +	uint8_t *mem;
> > +	int ret;
> > +
> > +	/* Check if vfio is enabled in both kernel and eal */
> > +	ret = rte_vfio_is_enabled("vfio");
> > +	if (!ret)
> > +		return 1;
> 
> No need, rte_vfio_container_dma_map() should set errno to ENODEV if vfio is
> not enabled.

Ack.

> 
> > +
> > +	/* Allocate twice size of page */
> > +	mem = mmap(NULL, sz, PROT_READ | PROT_WRITE,
> > +		   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
> > +	if (mem == MAP_FAILED) {
> > +		printf("Failed to allocate memory for external heap\n");
> > +		return -1;
> > +	}
> > +
> > +	/* Force page allocation */
> > +	memset(mem, 0, sz);
> > +
> > +	/* map the whole region */
> > +	ret = rte_vfio_container_dma_map(RTE_VFIO_DEFAULT_CONTAINER_FD,
> > +					 (uint64_t)mem, (rte_iova_t)mem, sz);
> 
> should be (uintptr_t) perhaps?
> 
> Also, this can return -1 with rte_errno == ENOTSUP, i think this happens if
> there are no devices attached (or if there's no VFIO support, like it would
> be on FreeBSD or Windows).
Ok. Will return 1 if NOTSUP.
> 
> > +	if (ret) {
> > +		printf("Failed to dma map whole region, ret=%d\n", ret);
> > +		goto fail;
> > +	}
> > +
> > +	unmap1 = (uint64_t)mem + (sz / 2);
> > +	sz1 = sz / 2;
> > +	unmap2 = (uint64_t)mem;
> > +	sz2 = sz / 2;
> > +	/* unmap the partial region */
> > +	ret = rte_vfio_container_dma_unmap(RTE_VFIO_DEFAULT_CONTAINER_FD,
> > +					   unmap1, (rte_iova_t)unmap1, sz1);
> > +	if (ret) {
> > +		if (rte_errno == ENOTSUP) {
> > +			printf("Partial dma unmap not supported\n");
> > +			unmap2 = (uint64_t)mem;
> > +			sz2 = sz;
> > +		} else {
> > +			printf("Failed to unmap send half region, ret=%d(%d)\n",
> 
> I think "send half" is a typo? Also, here and in other places, i would
> prefer a rte_strerror() instead of raw rte_errno number.
Ack.
> 
> > +			       ret, rte_errno);
> > +			goto fail;
> > +		}
> > +	}
> > +
> > +	/* unmap the remaining region */
> > +	ret = rte_vfio_container_dma_unmap(RTE_VFIO_DEFAULT_CONTAINER_FD,
> > +					   unmap2, (rte_iova_t)unmap2, sz2);
> > +	if (ret) {
> > +		printf("Failed to unmap remaining region, ret=%d(%d)\n", ret,
> > +		       rte_errno);
> > +		goto fail;
> > +	}
> > +
> > +fail:
> > +	munmap(mem, sz);
> > +	return ret;
> > +}
> > +
> > +static int
> >   test_memory(void)
> >   {
> >   	uint64_t s;
> > @@ -101,6 +171,15 @@ test_memory(void)
> >   		return -1;
> >   	}
> > +	/* test for vfio dma map/unmap */
> > +	ret = test_memory_vfio_dma_map();
> > +	if (ret == 1) {
> > +		printf("VFIO dma map/unmap unsupported\n");
> > +	} else if (ret < 0) {
> > +		printf("Error vfio dma map/unmap, ret=%d\n", ret);
> > +		return -1;
> > +	}
> > +
> 
> This looks odd in this autotest. Perhaps create a new autotest for VFIO?
Ack, will add test_vfio.c
> 
> >   	return 0;
> >   }
> > 
> 
> 
> -- 
> Thanks,
> Anatoly

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [dpdk-dev] [EXT] Re: [PATCH 2/2] vfio: fix partial DMA unmapping for VFIO type1
  2020-10-15  6:09     ` [dpdk-dev] [EXT] " Nithin Dabilpuram
@ 2020-10-15 10:00       ` Burakov, Anatoly
  2020-10-15 11:38         ` Nithin Dabilpuram
                           ` (2 more replies)
  0 siblings, 3 replies; 16+ messages in thread
From: Burakov, Anatoly @ 2020-10-15 10:00 UTC (permalink / raw)
  To: Nithin Dabilpuram; +Cc: jerinj, dev, stable

On 15-Oct-20 7:09 AM, Nithin Dabilpuram wrote:
> On Wed, Oct 14, 2020 at 04:07:10PM +0100, Burakov, Anatoly wrote:
>> External Email
>>
>> ----------------------------------------------------------------------
>> On 12-Oct-20 9:11 AM, Nithin Dabilpuram wrote:
>>> Partial unmapping is not supported for VFIO IOMMU type1
>>> by kernel. Though kernel gives return as zero, the unmapped size
>>> returned will not be same as expected. So check for
>>> returned unmap size and return error.
>>>
>>> For case of DMA map/unmap triggered by heap allocations,
>>> maintain granularity of memseg page size so that heap
>>> expansion and contraction does not have this issue.
>>
>> This is quite unfortunate, because there was a different bug that had to do
>> with kernel having a very limited number of mappings available [1], as a
>> result of which the page concatenation code was added.
>>
>> It should therefore be documented that the dma_entry_limit parameter should
>> be adjusted should the user run out of the DMA entries.
>>
>> [1] https://urldefense.proofpoint.com/v2/url?u=https-3A__lore.kernel.org_lkml_155414977872.12780.13728555131525362206.stgit-40gimli.home_T_&d=DwICaQ&c=nKjWec2b6R0mOyPaz7xtfQ&r=FZ_tPCbgFOh18zwRPO9H0yDx8VW38vuapifdDfc8SFQ&m=3GMg-634_cdUCY4WpQPwjzZ_S4ckuMHOnt2FxyyjXMk&s=TJLzppkaDS95VGyRHX2hzflQfb9XLK0OiOszSXoeXKk&e=

<snip>

>>>    			RTE_LOG(ERR, EAL, "  cannot clear DMA remapping, error %i (%s)\n",
>>>    					errno, strerror(errno));
>>>    			return -1;
>>> +		} else if (dma_unmap.size != len) {
>>> +			RTE_LOG(ERR, EAL, "  unexpected size %"PRIu64" of DMA "
>>> +				"remapping cleared instead of %"PRIu64"\n",
>>> +				(uint64_t)dma_unmap.size, len);
>>> +			rte_errno = EIO;
>>> +			return -1;
>>>    		}
>>>    	}
>>> @@ -1853,6 +1869,12 @@ container_dma_unmap(struct vfio_config *vfio_cfg, uint64_t vaddr, uint64_t iova,
>>>    		/* we're partially unmapping a previously mapped region, so we
>>>    		 * need to split entry into two.
>>>    		 */
>>> +		if (!vfio_cfg->vfio_iommu_type->partial_unmap) {
>>> +			RTE_LOG(DEBUG, EAL, "DMA partial unmap unsupported\n");
>>> +			rte_errno = ENOTSUP;
>>> +			ret = -1;
>>> +			goto out;
>>> +		}
>>
>> How would we ever arrive here if we never do more than 1 page worth of
>> memory anyway? I don't think this is needed.
> 
> container_dma_unmap() is called by user via rte_vfio_container_dma_unmap()
> and when he maps we don't split it, as we don't know about his memory.
> So if he maps multiple pages and tries to unmap partially, then we should fail.

Should we map it in page granularity then, instead of adding this 
discrepancy between EAL and user mapping? I.e. instead of adding a 
workaround, how about we just do the same thing for user mem mappings?

-- 
Thanks,
Anatoly


* Re: [dpdk-dev] [EXT] Re: [PATCH 2/2] vfio: fix partial DMA unmapping for VFIO type1
  2020-10-15 10:00       ` Burakov, Anatoly
@ 2020-10-15 11:38         ` Nithin Dabilpuram
  2020-10-15 11:50         ` Nithin Dabilpuram
  2020-10-15 11:57         ` Nithin Dabilpuram
  2 siblings, 0 replies; 16+ messages in thread
From: Nithin Dabilpuram @ 2020-10-15 11:38 UTC (permalink / raw)
  To: Burakov, Anatoly; +Cc: jerinj, dev, stable


On Thu, Oct 15, 2020 at 11:00:59AM +0100, Burakov, Anatoly wrote:
> On 15-Oct-20 7:09 AM, Nithin Dabilpuram wrote:
> > On Wed, Oct 14, 2020 at 04:07:10PM +0100, Burakov, Anatoly wrote:
> > > External Email
> > > 
> > > ----------------------------------------------------------------------
> > > On 12-Oct-20 9:11 AM, Nithin Dabilpuram wrote:
> > > > Partial unmapping is not supported for VFIO IOMMU type1
> > > > by kernel. Though kernel gives return as zero, the unmapped size
> > > > returned will not be same as expected. So check for
> > > > returned unmap size and return error.
> > > > 
> > > > For case of DMA map/unmap triggered by heap allocations,
> > > > maintain granularity of memseg page size so that heap
> > > > expansion and contraction does not have this issue.
> > > 
> > > This is quite unfortunate, because there was a different bug that had to do
> > > with kernel having a very limited number of mappings available [1], as a
> > > result of which the page concatenation code was added.
> > > 
> > > It should therefore be documented that the dma_entry_limit parameter should
> > > be adjusted should the user run out of the DMA entries.
> > > 
> > > [1] https://urldefense.proofpoint.com/v2/url?u=https-3A__lore.kernel.org_lkml_155414977872.12780.13728555131525362206.stgit-40gimli.home_T_&d=DwICaQ&c=nKjWec2b6R0mOyPaz7xtfQ&r=FZ_tPCbgFOh18zwRPO9H0yDx8VW38vuapifdDfc8SFQ&m=3GMg-634_cdUCY4WpQPwjzZ_S4ckuMHOnt2FxyyjXMk&s=TJLzppkaDS95VGyRHX2hzflQfb9XLK0OiOszSXoeXKk&e=
> 
> <snip>
> 
> > > >    			RTE_LOG(ERR, EAL, "  cannot clear DMA remapping, error %i (%s)\n",
> > > >    					errno, strerror(errno));
> > > >    			return -1;
> > > > +		} else if (dma_unmap.size != len) {
> > > > +			RTE_LOG(ERR, EAL, "  unexpected size %"PRIu64" of DMA "
> > > > +				"remapping cleared instead of %"PRIu64"\n",
> > > > +				(uint64_t)dma_unmap.size, len);
> > > > +			rte_errno = EIO;
> > > > +			return -1;
> > > >    		}
> > > >    	}
> > > > @@ -1853,6 +1869,12 @@ container_dma_unmap(struct vfio_config *vfio_cfg, uint64_t vaddr, uint64_t iova,
> > > >    		/* we're partially unmapping a previously mapped region, so we
> > > >    		 * need to split entry into two.
> > > >    		 */
> > > > +		if (!vfio_cfg->vfio_iommu_type->partial_unmap) {
> > > > +			RTE_LOG(DEBUG, EAL, "DMA partial unmap unsupported\n");
> > > > +			rte_errno = ENOTSUP;
> > > > +			ret = -1;
> > > > +			goto out;
> > > > +		}
> > > 
> > > How would we ever arrive here if we never do more than 1 page worth of
> > > memory anyway? I don't think this is needed.
> > 
> > container_dma_unmap() is called by user via rte_vfio_container_dma_unmap()
> > and when he maps we don't split it, as we don't know about his memory.
> > So if he maps multiple pages and tries to unmap partially, then we should fail.
> 
> Should we map it in page granularity then, instead of adding this
> discrepancy between EAL and user mapping? I.e. instead of adding a
> workaround, how about we just do the same thing for user mem mappings?

In heap mappings we map and unmap at huge page granularity, as we will always
maintain that.

But here I think we don't know whether the user's allocation is a huge page or
a collection of system pages. The only thing we can do here is map it at
system page granularity, which could waste entries if he really is working
with hugepages. Isn't it?

> 
> -- 
> Thanks,
> Anatoly


* Re: [dpdk-dev] [EXT] Re: [PATCH 2/2] vfio: fix partial DMA unmapping for VFIO type1
  2020-10-15 10:00       ` Burakov, Anatoly
  2020-10-15 11:38         ` Nithin Dabilpuram
@ 2020-10-15 11:50         ` Nithin Dabilpuram
  2020-10-15 11:57         ` Nithin Dabilpuram
  2 siblings, 0 replies; 16+ messages in thread
From: Nithin Dabilpuram @ 2020-10-15 11:50 UTC (permalink / raw)
  To: Burakov, Anatoly; +Cc: jerinj, dev, stable


On Thu, Oct 15, 2020 at 11:00:59AM +0100, Burakov, Anatoly wrote:
> On 15-Oct-20 7:09 AM, Nithin Dabilpuram wrote:
> > On Wed, Oct 14, 2020 at 04:07:10PM +0100, Burakov, Anatoly wrote:
> > > External Email
> > > 
> > > ----------------------------------------------------------------------
> > > On 12-Oct-20 9:11 AM, Nithin Dabilpuram wrote:
> > > > Partial unmapping is not supported for VFIO IOMMU type1
> > > > by kernel. Though kernel gives return as zero, the unmapped size
> > > > returned will not be same as expected. So check for
> > > > returned unmap size and return error.
> > > > 
> > > > For case of DMA map/unmap triggered by heap allocations,
> > > > maintain granularity of memseg page size so that heap
> > > > expansion and contraction does not have this issue.
> > > 
> > > This is quite unfortunate, because there was a different bug that had to do
> > > with kernel having a very limited number of mappings available [1], as a
> > > result of which the page concatenation code was added.
> > > 
> > > It should therefore be documented that the dma_entry_limit parameter should
> > > be adjusted should the user run out of the DMA entries.
> > > 
> > > [1] https://urldefense.proofpoint.com/v2/url?u=https-3A__lore.kernel.org_lkml_155414977872.12780.13728555131525362206.stgit-40gimli.home_T_&d=DwICaQ&c=nKjWec2b6R0mOyPaz7xtfQ&r=FZ_tPCbgFOh18zwRPO9H0yDx8VW38vuapifdDfc8SFQ&m=3GMg-634_cdUCY4WpQPwjzZ_S4ckuMHOnt2FxyyjXMk&s=TJLzppkaDS95VGyRHX2hzflQfb9XLK0OiOszSXoeXKk&e=
> 
> <snip>
> 
> > > >    			RTE_LOG(ERR, EAL, "  cannot clear DMA remapping, error %i (%s)\n",
> > > >    					errno, strerror(errno));
> > > >    			return -1;
> > > > +		} else if (dma_unmap.size != len) {
> > > > +			RTE_LOG(ERR, EAL, "  unexpected size %"PRIu64" of DMA "
> > > > +				"remapping cleared instead of %"PRIu64"\n",
> > > > +				(uint64_t)dma_unmap.size, len);
> > > > +			rte_errno = EIO;
> > > > +			return -1;
> > > >    		}
> > > >    	}
> > > > @@ -1853,6 +1869,12 @@ container_dma_unmap(struct vfio_config *vfio_cfg, uint64_t vaddr, uint64_t iova,
> > > >    		/* we're partially unmapping a previously mapped region, so we
> > > >    		 * need to split entry into two.
> > > >    		 */
> > > > +		if (!vfio_cfg->vfio_iommu_type->partial_unmap) {
> > > > +			RTE_LOG(DEBUG, EAL, "DMA partial unmap unsupported\n");
> > > > +			rte_errno = ENOTSUP;
> > > > +			ret = -1;
> > > > +			goto out;
> > > > +		}
> > > 
> > > How would we ever arrive here if we never do more than 1 page worth of
> > > memory anyway? I don't think this is needed.
> > 
> > container_dma_unmap() is called by user via rte_vfio_container_dma_unmap()
> > and when he maps we don't split it, as we don't know about his memory.
> > So if he maps multiple pages and tries to unmap partially, then we should fail.
> 
> Should we map it in page granularity then, instead of adding this
> discrepancy between EAL and user mapping? I.e. instead of adding a
> workaround, how about we just do the same thing for user mem mappings?

In heap mappings we map and unmap at huge page granularity, as we will always
maintain that.

But here I think we don't know whether the user's allocation is a huge page or
a collection of system pages. The only thing we can do here is map it at
system page granularity, which could waste entries if he really is working
with hugepages. Isn't it?

> 
> -- 
> Thanks,
> Anatoly


* Re: [dpdk-dev] [EXT] Re: [PATCH 2/2] vfio: fix partial DMA unmapping for VFIO type1
  2020-10-15 10:00       ` Burakov, Anatoly
  2020-10-15 11:38         ` Nithin Dabilpuram
  2020-10-15 11:50         ` Nithin Dabilpuram
@ 2020-10-15 11:57         ` Nithin Dabilpuram
  2020-10-15 15:10           ` Burakov, Anatoly
  2 siblings, 1 reply; 16+ messages in thread
From: Nithin Dabilpuram @ 2020-10-15 11:57 UTC (permalink / raw)
  To: Burakov, Anatoly; +Cc: Nithin Dabilpuram, Jerin Jacob, dev, stable

On Thu, Oct 15, 2020 at 3:31 PM Burakov, Anatoly
<anatoly.burakov@intel.com> wrote:
>
> On 15-Oct-20 7:09 AM, Nithin Dabilpuram wrote:
> > On Wed, Oct 14, 2020 at 04:07:10PM +0100, Burakov, Anatoly wrote:
> >> External Email
> >>
> >> ----------------------------------------------------------------------
> >> On 12-Oct-20 9:11 AM, Nithin Dabilpuram wrote:
> >>> Partial unmapping is not supported for VFIO IOMMU type1
> >>> by kernel. Though kernel gives return as zero, the unmapped size
> >>> returned will not be same as expected. So check for
> >>> returned unmap size and return error.
> >>>
> >>> For case of DMA map/unmap triggered by heap allocations,
> >>> maintain granularity of memseg page size so that heap
> >>> expansion and contraction does not have this issue.
> >>
> >> This is quite unfortunate, because there was a different bug that had to do
> >> with kernel having a very limited number of mappings available [1], as a
> >> result of which the page concatenation code was added.
> >>
> >> It should therefore be documented that the dma_entry_limit parameter should
> >> be adjusted should the user run out of the DMA entries.
> >>
> >> [1] https://urldefense.proofpoint.com/v2/url?u=https-3A__lore.kernel.org_lkml_155414977872.12780.13728555131525362206.stgit-40gimli.home_T_&d=DwICaQ&c=nKjWec2b6R0mOyPaz7xtfQ&r=FZ_tPCbgFOh18zwRPO9H0yDx8VW38vuapifdDfc8SFQ&m=3GMg-634_cdUCY4WpQPwjzZ_S4ckuMHOnt2FxyyjXMk&s=TJLzppkaDS95VGyRHX2hzflQfb9XLK0OiOszSXoeXKk&e=
>
> <snip>
>
> >>>                     RTE_LOG(ERR, EAL, "  cannot clear DMA remapping, error %i (%s)\n",
> >>>                                     errno, strerror(errno));
> >>>                     return -1;
> >>> +           } else if (dma_unmap.size != len) {
> >>> +                   RTE_LOG(ERR, EAL, "  unexpected size %"PRIu64" of DMA "
> >>> +                           "remapping cleared instead of %"PRIu64"\n",
> >>> +                           (uint64_t)dma_unmap.size, len);
> >>> +                   rte_errno = EIO;
> >>> +                   return -1;
> >>>             }
> >>>     }
> >>> @@ -1853,6 +1869,12 @@ container_dma_unmap(struct vfio_config *vfio_cfg, uint64_t vaddr, uint64_t iova,
> >>>             /* we're partially unmapping a previously mapped region, so we
> >>>              * need to split entry into two.
> >>>              */
> >>> +           if (!vfio_cfg->vfio_iommu_type->partial_unmap) {
> >>> +                   RTE_LOG(DEBUG, EAL, "DMA partial unmap unsupported\n");
> >>> +                   rte_errno = ENOTSUP;
> >>> +                   ret = -1;
> >>> +                   goto out;
> >>> +           }
> >>
> >> How would we ever arrive here if we never do more than 1 page worth of
> >> memory anyway? I don't think this is needed.
> >
> > container_dma_unmap() is called by user via rte_vfio_container_dma_unmap()
> > and when he maps we don't split it, as we don't know about his memory.
> > So if he maps multiple pages and tries to unmap partially, then we should fail.
>
> Should we map it in page granularity then, instead of adding this
> discrepancy between EAL and user mapping? I.e. instead of adding a
> workaround, how about we just do the same thing for user mem mappings?
>
In heap mappings we map and unmap at huge page granularity, as we will always
maintain that.

But here I think we don't know whether the user's allocation is a huge page or
a collection of system pages. The only thing we can do here is map it at
system page granularity, which could waste entries if he really is working
with hugepages. Isn't it?


>
> --
> Thanks,
> Anatoly


* Re: [dpdk-dev] [EXT] Re: [PATCH 2/2] vfio: fix partial DMA unmapping for VFIO type1
  2020-10-15 11:57         ` Nithin Dabilpuram
@ 2020-10-15 15:10           ` Burakov, Anatoly
  2020-10-16  7:10             ` Nithin Dabilpuram
  0 siblings, 1 reply; 16+ messages in thread
From: Burakov, Anatoly @ 2020-10-15 15:10 UTC (permalink / raw)
  To: Nithin Dabilpuram; +Cc: Nithin Dabilpuram, Jerin Jacob, dev, stable

On 15-Oct-20 12:57 PM, Nithin Dabilpuram wrote:
> On Thu, Oct 15, 2020 at 3:31 PM Burakov, Anatoly
> <anatoly.burakov@intel.com> wrote:
>>
>> On 15-Oct-20 7:09 AM, Nithin Dabilpuram wrote:
>>> On Wed, Oct 14, 2020 at 04:07:10PM +0100, Burakov, Anatoly wrote:
>>>> External Email
>>>>
>>>> ----------------------------------------------------------------------
>>>> On 12-Oct-20 9:11 AM, Nithin Dabilpuram wrote:
>>>>> Partial unmapping is not supported for VFIO IOMMU type1
>>>>> by kernel. Though kernel gives return as zero, the unmapped size
>>>>> returned will not be same as expected. So check for
>>>>> returned unmap size and return error.
>>>>>
>>>>> For case of DMA map/unmap triggered by heap allocations,
>>>>> maintain granularity of memseg page size so that heap
>>>>> expansion and contraction does not have this issue.
>>>>
>>>> This is quite unfortunate, because there was a different bug that had to do
>>>> with kernel having a very limited number of mappings available [1], as a
>>>> result of which the page concatenation code was added.
>>>>
>>>> It should therefore be documented that the dma_entry_limit parameter should
>>>> be adjusted should the user run out of the DMA entries.
>>>>
>>>> [1] https://urldefense.proofpoint.com/v2/url?u=https-3A__lore.kernel.org_lkml_155414977872.12780.13728555131525362206.stgit-40gimli.home_T_&d=DwICaQ&c=nKjWec2b6R0mOyPaz7xtfQ&r=FZ_tPCbgFOh18zwRPO9H0yDx8VW38vuapifdDfc8SFQ&m=3GMg-634_cdUCY4WpQPwjzZ_S4ckuMHOnt2FxyyjXMk&s=TJLzppkaDS95VGyRHX2hzflQfb9XLK0OiOszSXoeXKk&e=
>>
>> <snip>
>>
>>>>>                      RTE_LOG(ERR, EAL, "  cannot clear DMA remapping, error %i (%s)\n",
>>>>>                                      errno, strerror(errno));
>>>>>                      return -1;
>>>>> +           } else if (dma_unmap.size != len) {
>>>>> +                   RTE_LOG(ERR, EAL, "  unexpected size %"PRIu64" of DMA "
>>>>> +                           "remapping cleared instead of %"PRIu64"\n",
>>>>> +                           (uint64_t)dma_unmap.size, len);
>>>>> +                   rte_errno = EIO;
>>>>> +                   return -1;
>>>>>              }
>>>>>      }
>>>>> @@ -1853,6 +1869,12 @@ container_dma_unmap(struct vfio_config *vfio_cfg, uint64_t vaddr, uint64_t iova,
>>>>>              /* we're partially unmapping a previously mapped region, so we
>>>>>               * need to split entry into two.
>>>>>               */
>>>>> +           if (!vfio_cfg->vfio_iommu_type->partial_unmap) {
>>>>> +                   RTE_LOG(DEBUG, EAL, "DMA partial unmap unsupported\n");
>>>>> +                   rte_errno = ENOTSUP;
>>>>> +                   ret = -1;
>>>>> +                   goto out;
>>>>> +           }
>>>>
>>>> How would we ever arrive here if we never do more than 1 page worth of
>>>> memory anyway? I don't think this is needed.
>>>
>>> container_dma_unmap() is called by user via rte_vfio_container_dma_unmap()
>>> and when he maps we don't split it, as we don't know about his memory.
>>> So if he maps multiple pages and tries to unmap partially, then we should fail.
>>
>> Should we map it in page granularity then, instead of adding this
>> discrepancy between EAL and user mapping? I.e. instead of adding a
>> workaround, how about we just do the same thing for user mem mappings?
>>
> In heap mappings we map and unmap at huge page granularity, as we will always
> maintain that.
> 
> But here I think we don't know whether the user's allocation is a huge page or
> a collection of system pages. The only thing we can do here is map it at
> system page granularity, which could waste entries if he really is working
> with hugepages. Isn't it?
> 

Yeah, we do. The API mandates page granularity, and it will check against the
page size and number of IOVA entries, so yes, we do enforce that the IOVA
addresses supplied by the user have to be page addresses.

-- 
Thanks,
Anatoly


* Re: [dpdk-dev] [EXT] Re: [PATCH 2/2] vfio: fix partial DMA unmapping for VFIO type1
  2020-10-15 15:10           ` Burakov, Anatoly
@ 2020-10-16  7:10             ` Nithin Dabilpuram
  2020-10-17 16:14               ` Burakov, Anatoly
  0 siblings, 1 reply; 16+ messages in thread
From: Nithin Dabilpuram @ 2020-10-16  7:10 UTC (permalink / raw)
  To: Burakov, Anatoly; +Cc: Jerin Jacob, dev, stable

On Thu, Oct 15, 2020 at 04:10:31PM +0100, Burakov, Anatoly wrote:
> On 15-Oct-20 12:57 PM, Nithin Dabilpuram wrote:
> > On Thu, Oct 15, 2020 at 3:31 PM Burakov, Anatoly
> > <anatoly.burakov@intel.com> wrote:
> > > 
> > > On 15-Oct-20 7:09 AM, Nithin Dabilpuram wrote:
> > > > On Wed, Oct 14, 2020 at 04:07:10PM +0100, Burakov, Anatoly wrote:
> > > > > External Email
> > > > > 
> > > > > ----------------------------------------------------------------------
> > > > > On 12-Oct-20 9:11 AM, Nithin Dabilpuram wrote:
> > > > > > Partial unmapping is not supported for VFIO IOMMU type1
> > > > > > by kernel. Though kernel gives return as zero, the unmapped size
> > > > > > returned will not be same as expected. So check for
> > > > > > returned unmap size and return error.
> > > > > > 
> > > > > > For case of DMA map/unmap triggered by heap allocations,
> > > > > > maintain granularity of memseg page size so that heap
> > > > > > expansion and contraction does not have this issue.
> > > > > 
> > > > > This is quite unfortunate, because there was a different bug that had to do
> > > > > with kernel having a very limited number of mappings available [1], as a
> > > > > result of which the page concatenation code was added.
> > > > > 
> > > > > It should therefore be documented that the dma_entry_limit parameter should
> > > > > be adjusted should the user run out of the DMA entries.
> > > > > 
> > > > > [1] https://urldefense.proofpoint.com/v2/url?u=https-3A__lore.kernel.org_lkml_155414977872.12780.13728555131525362206.stgit-40gimli.home_T_&d=DwICaQ&c=nKjWec2b6R0mOyPaz7xtfQ&r=FZ_tPCbgFOh18zwRPO9H0yDx8VW38vuapifdDfc8SFQ&m=3GMg-634_cdUCY4WpQPwjzZ_S4ckuMHOnt2FxyyjXMk&s=TJLzppkaDS95VGyRHX2hzflQfb9XLK0OiOszSXoeXKk&e=
> > > 
> > > <snip>
> > > 
> > > > > >                      RTE_LOG(ERR, EAL, "  cannot clear DMA remapping, error %i (%s)\n",
> > > > > >                                      errno, strerror(errno));
> > > > > >                      return -1;
> > > > > > +           } else if (dma_unmap.size != len) {
> > > > > > +                   RTE_LOG(ERR, EAL, "  unexpected size %"PRIu64" of DMA "
> > > > > > +                           "remapping cleared instead of %"PRIu64"\n",
> > > > > > +                           (uint64_t)dma_unmap.size, len);
> > > > > > +                   rte_errno = EIO;
> > > > > > +                   return -1;
> > > > > >              }
> > > > > >      }
> > > > > > @@ -1853,6 +1869,12 @@ container_dma_unmap(struct vfio_config *vfio_cfg, uint64_t vaddr, uint64_t iova,
> > > > > >              /* we're partially unmapping a previously mapped region, so we
> > > > > >               * need to split entry into two.
> > > > > >               */
> > > > > > +           if (!vfio_cfg->vfio_iommu_type->partial_unmap) {
> > > > > > +                   RTE_LOG(DEBUG, EAL, "DMA partial unmap unsupported\n");
> > > > > > +                   rte_errno = ENOTSUP;
> > > > > > +                   ret = -1;
> > > > > > +                   goto out;
> > > > > > +           }
> > > > > 
> > > > > How would we ever arrive here if we never do more than 1 page worth of
> > > > > memory anyway? I don't think this is needed.
> > > > 
> > > > container_dma_unmap() is called by user via rte_vfio_container_dma_unmap()
> > > > and when he maps we don't split it, as we don't know about his memory.
> > > > So if he maps multiple pages and tries to unmap partially, then we should fail.
> > > 
> > > Should we map it in page granularity then, instead of adding this
> > > discrepancy between EAL and user mapping? I.e. instead of adding a
> > > workaround, how about we just do the same thing for user mem mappings?
> > > 
> > In heap mappings we map and unmap at huge page granularity, as we will always
> > maintain that.
> > 
> > But here I think we don't know whether the user's allocation is a huge page
> > or a collection of system pages. The only thing we can do here is map it at
> > system page granularity, which could waste entries if he really is working
> > with hugepages. Isn't it?
> > 
> 
> Yeah, we do. The API mandates page granularity, and it will check against the
> page size and number of IOVA entries, so yes, we do enforce that the IOVA
> addresses supplied by the user have to be page addresses.

If I look at rte_vfio_container_dma_map(), there is no mention of the huge page
size the user is providing, nor do we compute it. He can call
rte_vfio_container_dma_map() with a 1GB huge page or a 4K system page.

Am I missing something?
> 
> -- 
> Thanks,
> Anatoly


* Re: [dpdk-dev] [EXT] Re: [PATCH 2/2] vfio: fix partial DMA unmapping for VFIO type1
  2020-10-16  7:10             ` Nithin Dabilpuram
@ 2020-10-17 16:14               ` Burakov, Anatoly
  2020-10-19  9:43                 ` Nithin Dabilpuram
  0 siblings, 1 reply; 16+ messages in thread
From: Burakov, Anatoly @ 2020-10-17 16:14 UTC (permalink / raw)
  To: Nithin Dabilpuram; +Cc: Jerin Jacob, dev, stable

On 16-Oct-20 8:10 AM, Nithin Dabilpuram wrote:
> On Thu, Oct 15, 2020 at 04:10:31PM +0100, Burakov, Anatoly wrote:
>> On 15-Oct-20 12:57 PM, Nithin Dabilpuram wrote:
>>> On Thu, Oct 15, 2020 at 3:31 PM Burakov, Anatoly
>>> <anatoly.burakov@intel.com> wrote:
>>>>
>>>> On 15-Oct-20 7:09 AM, Nithin Dabilpuram wrote:
>>>>> On Wed, Oct 14, 2020 at 04:07:10PM +0100, Burakov, Anatoly wrote:
>>>>>> External Email
>>>>>>
>>>>>> ----------------------------------------------------------------------
>>>>>> On 12-Oct-20 9:11 AM, Nithin Dabilpuram wrote:
>>>>>>> Partial unmapping is not supported for VFIO IOMMU type1
>>>>>>> by kernel. Though kernel gives return as zero, the unmapped size
>>>>>>> returned will not be same as expected. So check for
>>>>>>> returned unmap size and return error.
>>>>>>>
>>>>>>> For case of DMA map/unmap triggered by heap allocations,
>>>>>>> maintain granularity of memseg page size so that heap
>>>>>>> expansion and contraction does not have this issue.
>>>>>>
>>>>>> This is quite unfortunate, because there was a different bug that had to do
>>>>>> with kernel having a very limited number of mappings available [1], as a
>>>>>> result of which the page concatenation code was added.
>>>>>>
>>>>>> It should therefore be documented that the dma_entry_limit parameter should
>>>>>> be adjusted should the user run out of the DMA entries.
>>>>>>
>>>>>> [1] https://urldefense.proofpoint.com/v2/url?u=https-3A__lore.kernel.org_lkml_155414977872.12780.13728555131525362206.stgit-40gimli.home_T_&d=DwICaQ&c=nKjWec2b6R0mOyPaz7xtfQ&r=FZ_tPCbgFOh18zwRPO9H0yDx8VW38vuapifdDfc8SFQ&m=3GMg-634_cdUCY4WpQPwjzZ_S4ckuMHOnt2FxyyjXMk&s=TJLzppkaDS95VGyRHX2hzflQfb9XLK0OiOszSXoeXKk&e=
>>>>
>>>> <snip>
>>>>
>>>>>>>                       RTE_LOG(ERR, EAL, "  cannot clear DMA remapping, error %i (%s)\n",
>>>>>>>                                       errno, strerror(errno));
>>>>>>>                       return -1;
>>>>>>> +           } else if (dma_unmap.size != len) {
>>>>>>> +                   RTE_LOG(ERR, EAL, "  unexpected size %"PRIu64" of DMA "
>>>>>>> +                           "remapping cleared instead of %"PRIu64"\n",
>>>>>>> +                           (uint64_t)dma_unmap.size, len);
>>>>>>> +                   rte_errno = EIO;
>>>>>>> +                   return -1;
>>>>>>>               }
>>>>>>>       }
>>>>>>> @@ -1853,6 +1869,12 @@ container_dma_unmap(struct vfio_config *vfio_cfg, uint64_t vaddr, uint64_t iova,
>>>>>>>               /* we're partially unmapping a previously mapped region, so we
>>>>>>>                * need to split entry into two.
>>>>>>>                */
>>>>>>> +           if (!vfio_cfg->vfio_iommu_type->partial_unmap) {
>>>>>>> +                   RTE_LOG(DEBUG, EAL, "DMA partial unmap unsupported\n");
>>>>>>> +                   rte_errno = ENOTSUP;
>>>>>>> +                   ret = -1;
>>>>>>> +                   goto out;
>>>>>>> +           }
>>>>>>
>>>>>> How would we ever arrive here if we never do more than 1 page worth of
>>>>>> memory anyway? I don't think this is needed.
>>>>>
>>>>> container_dma_unmap() is called by user via rte_vfio_container_dma_unmap()
>>>>> and when he maps we don't split it, as we don't know about his memory.
>>>>> So if he maps multiple pages and tries to unmap partially, then we should fail.
>>>>
>>>> Should we map it in page granularity then, instead of adding this
>>>> discrepancy between EAL and user mapping? I.e. instead of adding a
>>>> workaround, how about we just do the same thing for user mem mappings?
>>>>
>>> In heap mappings we map and unmap at huge page granularity, as we will always
>>> maintain that.
>>>
>>> But here I think we don't know whether the user's allocation is a huge page
>>> or a collection of system pages. The only thing we can do here is map it at
>>> system page granularity, which could waste entries if he really is working
>>> with hugepages. Isn't it?
>>>
>>
> > Yeah, we do. The API mandates page granularity, and it will check against the
> > page size and number of IOVA entries, so yes, we do enforce that the IOVA
> > addresses supplied by the user have to be page addresses.
> 
> If I look at rte_vfio_container_dma_map(), there is no mention of the huge
> page size the user is providing, nor do we compute it. He can call
> rte_vfio_container_dma_map() with a 1GB huge page or a 4K system page.
> 
> Am I missing something?

Are you suggesting that a DMA mapping for hugepage-backed memory will be 
made at system page size granularity? E.g. will a 1GB page-backed 
segment be mapped for DMA as a contiguous 4K-based block?

>>
>> -- 
>> Thanks,
>> Anatoly


-- 
Thanks,
Anatoly


* Re: [dpdk-dev] [EXT] Re: [PATCH 2/2] vfio: fix partial DMA unmapping for VFIO type1
  2020-10-17 16:14               ` Burakov, Anatoly
@ 2020-10-19  9:43                 ` Nithin Dabilpuram
  2020-10-22 12:13                   ` Nithin Dabilpuram
  0 siblings, 1 reply; 16+ messages in thread
From: Nithin Dabilpuram @ 2020-10-19  9:43 UTC (permalink / raw)
  To: Burakov, Anatoly; +Cc: Jerin Jacob, dev, stable

On Sat, Oct 17, 2020 at 05:14:55PM +0100, Burakov, Anatoly wrote:
> On 16-Oct-20 8:10 AM, Nithin Dabilpuram wrote:
> > On Thu, Oct 15, 2020 at 04:10:31PM +0100, Burakov, Anatoly wrote:
> > > On 15-Oct-20 12:57 PM, Nithin Dabilpuram wrote:
> > > > On Thu, Oct 15, 2020 at 3:31 PM Burakov, Anatoly
> > > > <anatoly.burakov@intel.com> wrote:
> > > > > 
> > > > > On 15-Oct-20 7:09 AM, Nithin Dabilpuram wrote:
> > > > > > On Wed, Oct 14, 2020 at 04:07:10PM +0100, Burakov, Anatoly wrote:
> > > > > > > External Email
> > > > > > > 
> > > > > > > ----------------------------------------------------------------------
> > > > > > > On 12-Oct-20 9:11 AM, Nithin Dabilpuram wrote:
> > > > > > > > Partial unmapping is not supported for VFIO IOMMU type1
> > > > > > > > by kernel. Though kernel gives return as zero, the unmapped size
> > > > > > > > returned will not be same as expected. So check for
> > > > > > > > returned unmap size and return error.
> > > > > > > > 
> > > > > > > > For case of DMA map/unmap triggered by heap allocations,
> > > > > > > > maintain granularity of memseg page size so that heap
> > > > > > > > expansion and contraction does not have this issue.
> > > > > > > 
> > > > > > > This is quite unfortunate, because there was a different bug that had to do
> > > > > > > with kernel having a very limited number of mappings available [1], as a
> > > > > > > result of which the page concatenation code was added.
> > > > > > > 
> > > > > > > It should therefore be documented that the dma_entry_limit parameter should
> > > > > > > be adjusted should the user run out of the DMA entries.
> > > > > > > 
> > > > > > > [1] https://urldefense.proofpoint.com/v2/url?u=https-3A__lore.kernel.org_lkml_155414977872.12780.13728555131525362206.stgit-40gimli.home_T_&d=DwICaQ&c=nKjWec2b6R0mOyPaz7xtfQ&r=FZ_tPCbgFOh18zwRPO9H0yDx8VW38vuapifdDfc8SFQ&m=3GMg-634_cdUCY4WpQPwjzZ_S4ckuMHOnt2FxyyjXMk&s=TJLzppkaDS95VGyRHX2hzflQfb9XLK0OiOszSXoeXKk&e=
> > > > > 
> > > > > <snip>
> > > > > 
> > > > > > > >                       RTE_LOG(ERR, EAL, "  cannot clear DMA remapping, error %i (%s)\n",
> > > > > > > >                                       errno, strerror(errno));
> > > > > > > >                       return -1;
> > > > > > > > +           } else if (dma_unmap.size != len) {
> > > > > > > > +                   RTE_LOG(ERR, EAL, "  unexpected size %"PRIu64" of DMA "
> > > > > > > > +                           "remapping cleared instead of %"PRIu64"\n",
> > > > > > > > +                           (uint64_t)dma_unmap.size, len);
> > > > > > > > +                   rte_errno = EIO;
> > > > > > > > +                   return -1;
> > > > > > > >               }
> > > > > > > >       }
> > > > > > > > @@ -1853,6 +1869,12 @@ container_dma_unmap(struct vfio_config *vfio_cfg, uint64_t vaddr, uint64_t iova,
> > > > > > > >               /* we're partially unmapping a previously mapped region, so we
> > > > > > > >                * need to split entry into two.
> > > > > > > >                */
> > > > > > > > +           if (!vfio_cfg->vfio_iommu_type->partial_unmap) {
> > > > > > > > +                   RTE_LOG(DEBUG, EAL, "DMA partial unmap unsupported\n");
> > > > > > > > +                   rte_errno = ENOTSUP;
> > > > > > > > +                   ret = -1;
> > > > > > > > +                   goto out;
> > > > > > > > +           }
> > > > > > > 
> > > > > > > How would we ever arrive here if we never do more than 1 page worth of
> > > > > > > memory anyway? I don't think this is needed.
> > > > > > 
> > > > > > container_dma_unmap() is called by user via rte_vfio_container_dma_unmap()
> > > > > > and when he maps we don't split it, as we don't know about his memory.
> > > > > > So if he maps multiple pages and tries to unmap partially, then we should fail.
> > > > > 
> > > > > Should we map it in page granularity then, instead of adding this
> > > > > discrepancy between EAL and user mapping? I.e. instead of adding a
> > > > > workaround, how about we just do the same thing for user mem mappings?
> > > > > 
> > > > In heap mapping's we map and unmap it at huge page granularity as we will always
> > > > maintain that.
> > > > 
> > > > But here I think we don't know if the user's allocation is a huge page or
> > > > a collection of system pages. The only thing we can do here is map it at
> > > > system page granularity, which could waste entries if he really is
> > > > working with hugepages, wouldn't it?
> > > > 
> > > 
> > > Yeah we do. The API mandates page granularity, and it will check
> > > against page size and number of IOVA entries, so yes, we do enforce the fact
> > > that the IOVA addresses supplied by the user have to be page addresses.
> > 
> > If I look at rte_vfio_container_dma_map(), there is no mention of the huge
> > page size the user is providing, nor do we compute it. He can call
> > rte_vfio_container_dma_map() with a 1GB huge page or a 4K system page.
> > 
> > Am I missing something?
> 
> Are you suggesting that a DMA mapping for hugepage-backed memory will be
> made at system page size granularity? E.g. will a 1GB page-backed segment be
> mapped for DMA as a contiguous 4K-based block?

I'm not suggesting anything. My only thought is how to solve the problem below.
Say application does the following.

#1 Allocate 1GB memory from huge page or some external mem.
#2 Do rte_vfio_container_dma_map(RTE_VFIO_DEFAULT_CONTAINER_FD, mem, mem, 1GB)
   In linux/eal_vfio.c, we map it as a single VFIO DMA entry of 1GB, as we
   don't know where this memory is coming from or what it is backed by.
#3 After a while call rte_vfio_container_dma_unmap(RTE_VFIO_DEFAULT_CONTAINER_FD, mem+4KB, mem+4KB, 4KB)
 
Though rte_vfio_container_dma_unmap() supports #3 by splitting the entry as
shown below, with the VFIO type1 IOMMU, #3 cannot be supported by the current
kernel interface. So how can we allow #3?
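With the type1 IOMMU, the unmap ioctl in step #3 returns 0 but reports a zero-sized unmap. A minimal, self-contained sketch of the size check the patch adds is below; note that `simulate_type1_unmap()` is a hypothetical stand-in for the real VFIO_IOMMU_UNMAP_DMA ioctl, not DPDK or kernel code:

```c
#include <assert.h>
#include <errno.h>
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/*
 * Hypothetical stand-in for the VFIO_IOMMU_UNMAP_DMA ioctl under the
 * type1 IOMMU: the call "succeeds", but the size it actually unmapped
 * is reported back.  A partial unmap of a larger mapping comes back as
 * a zero-sized unmap, per the kernel comment quoted earlier.
 */
static uint64_t
simulate_type1_unmap(uint64_t mapped_len, uint64_t req_len)
{
	return req_len == mapped_len ? req_len : 0;
}

/* The check the patch adds: treat a short unmap as an I/O error. */
static int
unmap_and_check(uint64_t mapped_len, uint64_t req_len)
{
	uint64_t unmapped = simulate_type1_unmap(mapped_len, req_len);

	if (unmapped != req_len) {
		fprintf(stderr, "unexpected size %" PRIu64
			" of DMA remapping cleared instead of %" PRIu64 "\n",
			unmapped, req_len);
		errno = EIO;
		return -1;
	}
	return 0;
}
```

So unmapping a full 1GB mapping passes the check, while unmapping a 4KB slice of it fails with EIO instead of silently "succeeding".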


static int
container_dma_unmap(struct vfio_config *vfio_cfg, uint64_t vaddr, uint64_t iova,
                uint64_t len) 
{
        struct user_mem_map *map, *new_map = NULL;
        struct user_mem_maps *user_mem_maps;
        int ret = 0; 

        user_mem_maps = &vfio_cfg->mem_maps;
        rte_spinlock_recursive_lock(&user_mem_maps->lock);

        /* find our mapping */
        map = find_user_mem_map(user_mem_maps, vaddr, iova, len);
        if (!map) {
                RTE_LOG(ERR, EAL, "Couldn't find previously mapped region\n");
                rte_errno = EINVAL;
                ret = -1;
                goto out; 
        }
        if (map->addr != vaddr || map->iova != iova || map->len != len) {
                /* we're partially unmapping a previously mapped region, so we
                 * need to split entry into two.
                 */


> 
> > > 
> > > -- 
> > > Thanks,
> > > Anatoly
> 
> 
> -- 
> Thanks,
> Anatoly

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [dpdk-dev] [EXT] Re: [PATCH 2/2] vfio: fix partial DMA unmapping for VFIO type1
  2020-10-19  9:43                 ` Nithin Dabilpuram
@ 2020-10-22 12:13                   ` Nithin Dabilpuram
  0 siblings, 0 replies; 16+ messages in thread
From: Nithin Dabilpuram @ 2020-10-22 12:13 UTC (permalink / raw)
  To: Burakov, Anatoly; +Cc: Jerin Jacob, dev, stable

Ping.

On Mon, Oct 19, 2020 at 03:13:15PM +0530, Nithin Dabilpuram wrote:
> On Sat, Oct 17, 2020 at 05:14:55PM +0100, Burakov, Anatoly wrote:
> > On 16-Oct-20 8:10 AM, Nithin Dabilpuram wrote:
> > > On Thu, Oct 15, 2020 at 04:10:31PM +0100, Burakov, Anatoly wrote:
> > > > On 15-Oct-20 12:57 PM, Nithin Dabilpuram wrote:
> > > > > On Thu, Oct 15, 2020 at 3:31 PM Burakov, Anatoly
> > > > > <anatoly.burakov@intel.com> wrote:
> > > > > > 
> > > > > > On 15-Oct-20 7:09 AM, Nithin Dabilpuram wrote:
> > > > > > > On Wed, Oct 14, 2020 at 04:07:10PM +0100, Burakov, Anatoly wrote:
> > > > > > > > On 12-Oct-20 9:11 AM, Nithin Dabilpuram wrote:
> > > > > > > > > Partial unmapping is not supported for VFIO IOMMU type1
> > > > > > > > > by the kernel. Though the kernel returns zero, the unmapped
> > > > > > > > > size returned will not be the same as requested. So check the
> > > > > > > > > returned unmap size and return an error.
> > > > > > > > > 
> > > > > > > > > For case of DMA map/unmap triggered by heap allocations,
> > > > > > > > > maintain granularity of memseg page size so that heap
> > > > > > > > > expansion and contraction does not have this issue.
> > > > > > > > 
> > > > > > > > This is quite unfortunate, because there was a different bug that had to do
> > > > > > > > with kernel having a very limited number of mappings available [1], as a
> > > > > > > > result of which the page concatenation code was added.
> > > > > > > > 
> > > > > > > > It should therefore be documented that the dma_entry_limit parameter should
> > > > > > > > be adjusted should the user run out of the DMA entries.
> > > > > > > > 
> > > > > > > > [1] https://lore.kernel.org/lkml/155414977872.12780.13728555131525362206.stgit@gimli.home/T/
> > > > > > 
> > > > > > <snip>
> > > > > > 
> > > > > > > > >                       RTE_LOG(ERR, EAL, "  cannot clear DMA remapping, error %i (%s)\n",
> > > > > > > > >                                       errno, strerror(errno));
> > > > > > > > >                       return -1;
> > > > > > > > > +           } else if (dma_unmap.size != len) {
> > > > > > > > > +                   RTE_LOG(ERR, EAL, "  unexpected size %"PRIu64" of DMA "
> > > > > > > > > +                           "remapping cleared instead of %"PRIu64"\n",
> > > > > > > > > +                           (uint64_t)dma_unmap.size, len);
> > > > > > > > > +                   rte_errno = EIO;
> > > > > > > > > +                   return -1;
> > > > > > > > >               }
> > > > > > > > >       }
> > > > > > > > > @@ -1853,6 +1869,12 @@ container_dma_unmap(struct vfio_config *vfio_cfg, uint64_t vaddr, uint64_t iova,
> > > > > > > > >               /* we're partially unmapping a previously mapped region, so we
> > > > > > > > >                * need to split entry into two.
> > > > > > > > >                */
> > > > > > > > > +           if (!vfio_cfg->vfio_iommu_type->partial_unmap) {
> > > > > > > > > +                   RTE_LOG(DEBUG, EAL, "DMA partial unmap unsupported\n");
> > > > > > > > > +                   rte_errno = ENOTSUP;
> > > > > > > > > +                   ret = -1;
> > > > > > > > > +                   goto out;
> > > > > > > > > +           }
> > > > > > > > 
> > > > > > > > How would we ever arrive here if we never do more than 1 page worth of
> > > > > > > > memory anyway? I don't think this is needed.
> > > > > > > 
> > > > > > > container_dma_unmap() is called by user via rte_vfio_container_dma_unmap()
> > > > > > > and when he maps we don't split it, as we don't know about his memory.
> > > > > > > So if he maps multiple pages and tries to unmap partially, then we should fail.
> > > > > > 
> > > > > > Should we map it in page granularity then, instead of adding this
> > > > > > discrepancy between EAL and user mapping? I.e. instead of adding a
> > > > > > workaround, how about we just do the same thing for user mem mappings?
> > > > > > 
> > > > > In heap mapping's we map and unmap it at huge page granularity as we will always
> > > > > maintain that.
> > > > > 
> > > > > But here I think we don't know if the user's allocation is a huge page or
> > > > > a collection of system pages. The only thing we can do here is map it at
> > > > > system page granularity, which could waste entries if he really is
> > > > > working with hugepages, wouldn't it?
> > > > > 
> > > > 
> > > > Yeah we do. The API mandates page granularity, and it will check
> > > > against page size and number of IOVA entries, so yes, we do enforce the fact
> > > > that the IOVA addresses supplied by the user have to be page addresses.
> > > 
> > > If I look at rte_vfio_container_dma_map(), there is no mention of the huge
> > > page size the user is providing, nor do we compute it. He can call
> > > rte_vfio_container_dma_map() with a 1GB huge page or a 4K system page.
> > > 
> > > Am I missing something?
> > 
> > Are you suggesting that a DMA mapping for hugepage-backed memory will be
> > made at system page size granularity? E.g. will a 1GB page-backed segment be
> > mapped for DMA as a contiguous 4K-based block?
> 
> I'm not suggesting anything. My only thought is how to solve the problem below.
> Say application does the following.
> 
> #1 Allocate 1GB memory from huge page or some external mem.
> #2 Do rte_vfio_container_dma_map(RTE_VFIO_DEFAULT_CONTAINER_FD, mem, mem, 1GB)
>    In linux/eal_vfio.c, we map it as a single VFIO DMA entry of 1GB, as we
>    don't know where this memory is coming from or what it is backed by.
> #3 After a while call rte_vfio_container_dma_unmap(RTE_VFIO_DEFAULT_CONTAINER_FD, mem+4KB, mem+4KB, 4KB)
>  
> Though rte_vfio_container_dma_unmap() supports #3 by splitting the entry as
> shown below, with the VFIO type1 IOMMU, #3 cannot be supported by the current
> kernel interface. So how can we allow #3?
> 
> 
> static int
> container_dma_unmap(struct vfio_config *vfio_cfg, uint64_t vaddr, uint64_t iova,
>                 uint64_t len) 
> {
>         struct user_mem_map *map, *new_map = NULL;
>         struct user_mem_maps *user_mem_maps;
>         int ret = 0; 
> 
>         user_mem_maps = &vfio_cfg->mem_maps;
>         rte_spinlock_recursive_lock(&user_mem_maps->lock);
> 
>         /* find our mapping */
>         map = find_user_mem_map(user_mem_maps, vaddr, iova, len);
>         if (!map) {
>                 RTE_LOG(ERR, EAL, "Couldn't find previously mapped region\n");
>                 rte_errno = EINVAL;
>                 ret = -1;
>                 goto out; 
>         }
>         if (map->addr != vaddr || map->iova != iova || map->len != len) {
>                 /* we're partially unmapping a previously mapped region, so we
>                  * need to split entry into two.
>                  */
> 
> 
> > 
> > > > 
> > > > -- 
> > > > Thanks,
> > > > Anatoly
> > 
> > 
> > -- 
> > Thanks,
> > Anatoly

^ permalink raw reply	[flat|nested] 16+ messages in thread

end of thread, back to index

Thread overview: 16+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2020-10-12  8:11 [dpdk-dev] [PATCH 0/2] fix issue with partial DMA unmap Nithin Dabilpuram
2020-10-12  8:11 ` [dpdk-dev] [PATCH 1/2] test: add test case to validate VFIO DMA map/unmap Nithin Dabilpuram
2020-10-14 14:39   ` Burakov, Anatoly
2020-10-15  9:54     ` [dpdk-dev] [EXT] " Nithin Dabilpuram
2020-10-12  8:11 ` [dpdk-dev] [PATCH 2/2] vfio: fix partial DMA unmapping for VFIO type1 Nithin Dabilpuram
2020-10-14 15:07   ` Burakov, Anatoly
2020-10-15  6:09     ` [dpdk-dev] [EXT] " Nithin Dabilpuram
2020-10-15 10:00       ` Burakov, Anatoly
2020-10-15 11:38         ` Nithin Dabilpuram
2020-10-15 11:50         ` Nithin Dabilpuram
2020-10-15 11:57         ` Nithin Dabilpuram
2020-10-15 15:10           ` Burakov, Anatoly
2020-10-16  7:10             ` Nithin Dabilpuram
2020-10-17 16:14               ` Burakov, Anatoly
2020-10-19  9:43                 ` Nithin Dabilpuram
2020-10-22 12:13                   ` Nithin Dabilpuram
