* [PATCH v2 0/2] vhost: fix async address mapping
From: Xuan Ding <xuan.ding@intel.com> @ 2022-01-19 15:10 UTC
To: maxime.coquelin, chenbo.xia; +Cc: dev, jiayu.hu, yuanx.wang, Xuan Ding

This patchset fixes the issue of incorrect DMA mapping in PA mode.
Due to the ambiguity of host_phys_addr naming in the guest page
struct, rename it to host_iova.

v2:
* Change the order of patch.

Xuan Ding (2):
  vhost: rename field in guest page struct
  vhost: fix physical address mapping

 lib/vhost/vhost.h      |  11 ++--
 lib/vhost/vhost_user.c | 130 ++++++++++++++++++++---------------------
 lib/vhost/virtio_net.c |  11 ++--
 3 files changed, 75 insertions(+), 77 deletions(-)

-- 
2.17.1

^ permalink raw reply	[flat|nested] 11+ messages in thread
* [PATCH v2 1/2] vhost: rename field in guest page struct
From: Xuan Ding <xuan.ding@intel.com> @ 2022-01-19 15:10 UTC
To: maxime.coquelin, chenbo.xia; +Cc: dev, jiayu.hu, yuanx.wang, Xuan Ding

This patch renames the host_phys_addr to host_iova in the guest_page
struct. The host_phys_addr is in fact an IOVA; its value depends on
the DPDK IOVA mode.

Signed-off-by: Xuan Ding <xuan.ding@intel.com>
---
 lib/vhost/vhost.h      | 10 +++++-----
 lib/vhost/vhost_user.c | 20 ++++++++++----------
 lib/vhost/virtio_net.c | 11 ++++++-----
 3 files changed, 21 insertions(+), 20 deletions(-)

diff --git a/lib/vhost/vhost.h b/lib/vhost/vhost.h
index 7085e0885c..ca7f58039d 100644
--- a/lib/vhost/vhost.h
+++ b/lib/vhost/vhost.h
@@ -354,7 +354,7 @@ struct vring_packed_desc_event {
 
 struct guest_page {
 	uint64_t guest_phys_addr;
-	uint64_t host_phys_addr;
+	uint64_t host_iova;
 	uint64_t size;
 };
 
@@ -604,13 +604,13 @@ gpa_to_first_hpa(struct virtio_net *dev, uint64_t gpa,
 			if (gpa + gpa_size <=
 					page->guest_phys_addr + page->size) {
 				return gpa - page->guest_phys_addr +
-					page->host_phys_addr;
+					page->host_iova;
 			} else if (gpa < page->guest_phys_addr +
 					page->size) {
 				*hpa_size = page->guest_phys_addr +
 					page->size - gpa;
 				return gpa - page->guest_phys_addr +
-					page->host_phys_addr;
+					page->host_iova;
 			}
 		}
 	} else {
@@ -621,13 +621,13 @@ gpa_to_first_hpa(struct virtio_net *dev, uint64_t gpa,
 			if (gpa + gpa_size <=
 					page->guest_phys_addr + page->size) {
 				return gpa - page->guest_phys_addr +
-					page->host_phys_addr;
+					page->host_iova;
 			} else if (gpa < page->guest_phys_addr +
 					page->size) {
 				*hpa_size = page->guest_phys_addr +
 					page->size - gpa;
 				return gpa - page->guest_phys_addr +
-					page->host_phys_addr;
+					page->host_iova;
 			}
 		}
 	}
diff --git a/lib/vhost/vhost_user.c b/lib/vhost/vhost_user.c
index a781346c4d..95c9df697e 100644
--- a/lib/vhost/vhost_user.c
+++ b/lib/vhost/vhost_user.c
@@ -978,7 +978,7 @@ vhost_user_set_vring_base(struct virtio_net **pdev,
 
 static int
 add_one_guest_page(struct virtio_net *dev, uint64_t guest_phys_addr,
-		   uint64_t host_phys_addr, uint64_t size)
+		   uint64_t host_iova, uint64_t size)
 {
 	struct guest_page *page, *last_page;
 	struct guest_page *old_pages;
@@ -999,7 +999,7 @@ add_one_guest_page(struct virtio_net *dev, uint64_t guest_phys_addr,
 	if (dev->nr_guest_pages > 0) {
 		last_page = &dev->guest_pages[dev->nr_guest_pages - 1];
 		/* merge if the two pages are continuous */
-		if (host_phys_addr == last_page->host_phys_addr +
+		if (host_iova == last_page->host_iova +
 				last_page->size) {
 			last_page->size += size;
 			return 0;
@@ -1008,7 +1008,7 @@ add_one_guest_page(struct virtio_net *dev, uint64_t guest_phys_addr,
 
 	page = &dev->guest_pages[dev->nr_guest_pages++];
 	page->guest_phys_addr = guest_phys_addr;
-	page->host_phys_addr = host_phys_addr;
+	page->host_iova = host_iova;
 	page->size = size;
 
 	return 0;
@@ -1021,14 +1021,14 @@ add_guest_pages(struct virtio_net *dev, struct rte_vhost_mem_region *reg,
 	uint64_t reg_size = reg->size;
 	uint64_t host_user_addr  = reg->host_user_addr;
 	uint64_t guest_phys_addr = reg->guest_phys_addr;
-	uint64_t host_phys_addr;
+	uint64_t host_iova;
 	uint64_t size;
 
-	host_phys_addr = rte_mem_virt2iova((void *)(uintptr_t)host_user_addr);
+	host_iova = rte_mem_virt2iova((void *)(uintptr_t)host_user_addr);
 	size = page_size - (guest_phys_addr & (page_size - 1));
 	size = RTE_MIN(size, reg_size);
 
-	if (add_one_guest_page(dev, guest_phys_addr, host_phys_addr, size) < 0)
+	if (add_one_guest_page(dev, guest_phys_addr, host_iova, size) < 0)
 		return -1;
 
 	host_user_addr  += size;
@@ -1037,9 +1037,9 @@ add_guest_pages(struct virtio_net *dev, struct rte_vhost_mem_region *reg,
 	while (reg_size > 0) {
 		size = RTE_MIN(reg_size, page_size);
 
-		host_phys_addr = rte_mem_virt2iova((void *)(uintptr_t)
+		host_iova = rte_mem_virt2iova((void *)(uintptr_t)
 					host_user_addr);
-		if (add_one_guest_page(dev, guest_phys_addr, host_phys_addr,
+		if (add_one_guest_page(dev, guest_phys_addr, host_iova,
 				size) < 0)
 			return -1;
 
@@ -1071,11 +1071,11 @@ dump_guest_pages(struct virtio_net *dev)
 		VHOST_LOG_CONFIG(INFO,
 			"guest physical page region %u\n"
 			"\t guest_phys_addr: %" PRIx64 "\n"
-			"\t host_phys_addr : %" PRIx64 "\n"
+			"\t host_iova      : %" PRIx64 "\n"
 			"\t size           : %" PRIx64 "\n",
 			i,
 			page->guest_phys_addr,
-			page->host_phys_addr,
+			page->host_iova,
 			page->size);
 	}
 }
diff --git a/lib/vhost/virtio_net.c b/lib/vhost/virtio_net.c
index b3d954aab4..226dcc8b18 100644
--- a/lib/vhost/virtio_net.c
+++ b/lib/vhost/virtio_net.c
@@ -870,20 +870,21 @@ async_mbuf_to_desc_seg(struct virtio_net *dev, struct vhost_virtqueue *vq,
 	struct vhost_async *async = vq->async;
 	uint64_t mapped_len;
 	uint32_t buf_offset = 0;
-	void *hpa;
+	void *host_iova;
 
 	while (cpy_len) {
-		hpa = (void *)(uintptr_t)gpa_to_first_hpa(dev,
+		host_iova = (void *)(uintptr_t)gpa_to_first_hpa(dev,
 				buf_iova + buf_offset, cpy_len, &mapped_len);
-		if (unlikely(!hpa)) {
-			VHOST_LOG_DATA(ERR, "(%d) %s: failed to get hpa.\n", dev->vid, __func__);
+		if (unlikely(!host_iova)) {
+			VHOST_LOG_DATA(ERR, "(%d) %s: failed to get host iova.\n",
+				dev->vid, __func__);
 			return -1;
 		}
 
 		if (unlikely(async_iter_add_iovec(async,
 				(void *)(uintptr_t)rte_pktmbuf_iova_offset(m, mbuf_offset),
-				hpa, (size_t)mapped_len)))
+				host_iova, (size_t)mapped_len)))
 			return -1;
 
 		cpy_len -= (uint32_t)mapped_len;
-- 
2.17.1
* Re: [PATCH v2 1/2] vhost: rename field in guest page struct
From: Maxime Coquelin @ 2022-02-01  8:47 UTC
To: xuan.ding, chenbo.xia; +Cc: dev, jiayu.hu, yuanx.wang

On 1/19/22 16:10, xuan.ding@intel.com wrote:
> From: Xuan Ding <xuan.ding@intel.com>
>
> This patch renames the host_phys_addr to host_iova in the guest_page
> struct. The host_phys_addr is in fact an IOVA; its value depends on
> the DPDK IOVA mode.
>
> Signed-off-by: Xuan Ding <xuan.ding@intel.com>
> ---
>   lib/vhost/vhost.h      | 10 +++++-----
>   lib/vhost/vhost_user.c | 20 ++++++++++----------
>   lib/vhost/virtio_net.c | 11 ++++++-----
>   3 files changed, 21 insertions(+), 20 deletions(-)
>

Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>

Thanks,
Maxime
* [PATCH v2 2/2] vhost: fix physical address mapping
From: Xuan Ding <xuan.ding@intel.com> @ 2022-01-19 15:10 UTC
To: maxime.coquelin, chenbo.xia; +Cc: dev, jiayu.hu, yuanx.wang, Xuan Ding

When choosing IOVA as PA mode, IOVA is likely to be discontinuous,
which requires page by page mapping for DMA devices. To be consistent,
this patch implements page by page mapping instead of mapping at the
region granularity for both IOVA as VA and PA mode.

Fixes: 7c61fa08b716 ("vhost: enable IOMMU for async vhost")

Signed-off-by: Xuan Ding <xuan.ding@intel.com>
Signed-off-by: Yuan Wang <yuanx.wang@intel.com>
---
 lib/vhost/vhost.h      |   1 +
 lib/vhost/vhost_user.c | 116 ++++++++++++++++++++---------------------
 2 files changed, 57 insertions(+), 60 deletions(-)

diff --git a/lib/vhost/vhost.h b/lib/vhost/vhost.h
index ca7f58039d..9521ae56da 100644
--- a/lib/vhost/vhost.h
+++ b/lib/vhost/vhost.h
@@ -355,6 +355,7 @@ struct vring_packed_desc_event {
 struct guest_page {
 	uint64_t guest_phys_addr;
 	uint64_t host_iova;
+	uint64_t host_user_addr;
 	uint64_t size;
 };
 
diff --git a/lib/vhost/vhost_user.c b/lib/vhost/vhost_user.c
index 95c9df697e..48c08716ba 100644
--- a/lib/vhost/vhost_user.c
+++ b/lib/vhost/vhost_user.c
@@ -143,57 +143,56 @@ get_blk_size(int fd)
 	return ret == -1 ? (uint64_t)-1 : (uint64_t)stat.st_blksize;
 }
 
-static int
-async_dma_map(struct rte_vhost_mem_region *region, bool do_map)
+static void
+async_dma_map(struct virtio_net *dev, bool do_map)
 {
-	uint64_t host_iova;
 	int ret = 0;
-
-	host_iova = rte_mem_virt2iova((void *)(uintptr_t)region->host_user_addr);
+	uint32_t i;
+	struct guest_page *page;
 	if (do_map) {
-		/* Add mapped region into the default container of DPDK. */
-		ret = rte_vfio_container_dma_map(RTE_VFIO_DEFAULT_CONTAINER_FD,
-						 region->host_user_addr,
-						 host_iova,
-						 region->size);
-		if (ret) {
-			/*
-			 * DMA device may bind with kernel driver, in this case,
-			 * we don't need to program IOMMU manually. However, if no
-			 * device is bound with vfio/uio in DPDK, and vfio kernel
-			 * module is loaded, the API will still be called and return
-			 * with ENODEV/ENOSUP.
-			 *
-			 * DPDK vfio only returns ENODEV/ENOSUP in very similar
-			 * situations(vfio either unsupported, or supported
-			 * but no devices found). Either way, no mappings could be
-			 * performed. We treat it as normal case in async path.
-			 */
-			if (rte_errno == ENODEV || rte_errno == ENOTSUP)
-				return 0;
-
-			VHOST_LOG_CONFIG(ERR, "DMA engine map failed\n");
-			/* DMA mapping errors won't stop VHST_USER_SET_MEM_TABLE. */
-			return 0;
+		for (i = 0; i < dev->nr_guest_pages; i++) {
+			page = &dev->guest_pages[i];
+			ret = rte_vfio_container_dma_map(RTE_VFIO_DEFAULT_CONTAINER_FD,
+							 page->host_user_addr,
+							 page->host_iova,
+							 page->size);
+			if (ret) {
+				/*
+				 * DMA device may bind with kernel driver, in this case,
+				 * we don't need to program IOMMU manually. However, if no
+				 * device is bound with vfio/uio in DPDK, and vfio kernel
+				 * module is loaded, the API will still be called and return
+				 * with ENODEV.
+				 *
+				 * DPDK vfio only returns ENODEV in very similar situations
+				 * (vfio either unsupported, or supported but no devices found).
+				 * Either way, no mappings could be performed. We treat it as
+				 * normal case in async path. This is a workaround.
+				 */
+				if (rte_errno == ENODEV)
+					return;
+
+				/* DMA mapping errors won't stop VHOST_USER_SET_MEM_TABLE. */
+				VHOST_LOG_CONFIG(ERR, "DMA engine map failed\n");
+			}
 		}
 
 	} else {
-		/* Remove mapped region from the default container of DPDK. */
-		ret = rte_vfio_container_dma_unmap(RTE_VFIO_DEFAULT_CONTAINER_FD,
-						   region->host_user_addr,
-						   host_iova,
-						   region->size);
-		if (ret) {
-			/* like DMA map, ignore the kernel driver case when unmap. */
-			if (rte_errno == EINVAL)
-				return 0;
-
-			VHOST_LOG_CONFIG(ERR, "DMA engine unmap failed\n");
-			return ret;
+		for (i = 0; i < dev->nr_guest_pages; i++) {
+			page = &dev->guest_pages[i];
+			ret = rte_vfio_container_dma_unmap(RTE_VFIO_DEFAULT_CONTAINER_FD,
+							   page->host_user_addr,
+							   page->host_iova,
+							   page->size);
+			if (ret) {
+				/* like DMA map, ignore the kernel driver case when unmap. */
+				if (rte_errno == EINVAL)
+					return;
+
+				VHOST_LOG_CONFIG(ERR, "DMA engine unmap failed\n");
+			}
 		}
 	}
-
-	return ret;
 }
 
 static void
@@ -205,12 +204,12 @@ free_mem_region(struct virtio_net *dev)
 	if (!dev || !dev->mem)
 		return;
 
+	if (dev->async_copy && rte_vfio_is_enabled("vfio"))
+		async_dma_map(dev, false);
+
 	for (i = 0; i < dev->mem->nregions; i++) {
 		reg = &dev->mem->regions[i];
 		if (reg->host_user_addr) {
-			if (dev->async_copy && rte_vfio_is_enabled("vfio"))
-				async_dma_map(reg, false);
-
 			munmap(reg->mmap_addr, reg->mmap_size);
 			close(reg->fd);
 		}
@@ -978,7 +977,7 @@ vhost_user_set_vring_base(struct virtio_net **pdev,
 
 static int
 add_one_guest_page(struct virtio_net *dev, uint64_t guest_phys_addr,
-		   uint64_t host_iova, uint64_t size)
+		   uint64_t host_iova, uint64_t host_user_addr, uint64_t size)
 {
 	struct guest_page *page, *last_page;
 	struct guest_page *old_pages;
@@ -999,8 +998,9 @@ add_one_guest_page(struct virtio_net *dev, uint64_t guest_phys_addr,
 	if (dev->nr_guest_pages > 0) {
 		last_page = &dev->guest_pages[dev->nr_guest_pages - 1];
 		/* merge if the two pages are continuous */
-		if (host_iova == last_page->host_iova +
-				last_page->size) {
+		if (host_iova == last_page->host_iova + last_page->size
+			&& guest_phys_addr == last_page->guest_phys_addr + last_page->size
+			&& host_user_addr == last_page->host_user_addr + last_page->size) {
 			last_page->size += size;
 			return 0;
 		}
@@ -1009,6 +1009,7 @@ add_one_guest_page(struct virtio_net *dev, uint64_t guest_phys_addr,
 	page = &dev->guest_pages[dev->nr_guest_pages++];
 	page->guest_phys_addr = guest_phys_addr;
 	page->host_iova = host_iova;
+	page->host_user_addr = host_user_addr;
 	page->size = size;
 
 	return 0;
@@ -1028,7 +1029,8 @@ add_guest_pages(struct virtio_net *dev, struct rte_vhost_mem_region *reg,
 	size = page_size - (guest_phys_addr & (page_size - 1));
 	size = RTE_MIN(size, reg_size);
 
-	if (add_one_guest_page(dev, guest_phys_addr, host_iova, size) < 0)
+	if (add_one_guest_page(dev, guest_phys_addr, host_iova,
+			host_user_addr, size) < 0)
 		return -1;
 
 	host_user_addr += size;
@@ -1040,7 +1042,7 @@ add_guest_pages(struct virtio_net *dev, struct rte_vhost_mem_region *reg,
 		host_iova = rte_mem_virt2iova((void *)(uintptr_t)
 				host_user_addr);
 		if (add_one_guest_page(dev, guest_phys_addr, host_iova,
-				size) < 0)
+				host_user_addr, size) < 0)
 			return -1;
 
 		host_user_addr += size;
@@ -1215,7 +1217,6 @@ vhost_user_mmap_region(struct virtio_net *dev,
 	uint64_t mmap_size;
 	uint64_t alignment;
 	int populate;
-	int ret;
 
 	/* Check for memory_size + mmap_offset overflow */
 	if (mmap_offset >= -region->size) {
@@ -1274,14 +1275,6 @@ vhost_user_mmap_region(struct virtio_net *dev,
 			VHOST_LOG_CONFIG(ERR, "adding guest pages to region failed.\n");
 			return -1;
 		}
-
-		if (rte_vfio_is_enabled("vfio")) {
-			ret = async_dma_map(region, true);
-			if (ret) {
-				VHOST_LOG_CONFIG(ERR, "Configure IOMMU for DMA engine failed\n");
-				return -1;
-			}
-		}
 	}
 
 	VHOST_LOG_CONFIG(INFO,
@@ -1420,6 +1413,9 @@ vhost_user_set_mem_table(struct virtio_net **pdev, struct VhostUserMsg *msg,
 		dev->mem->nregions++;
 	}
 
+	if (dev->async_copy && rte_vfio_is_enabled("vfio"))
+		async_dma_map(dev, true);
+
 	if (vhost_user_postcopy_register(dev, main_fd, msg) < 0)
 		goto free_mem_table;
-- 
2.17.1
* Re: [PATCH v2 2/2] vhost: fix physical address mapping 2022-01-19 15:10 ` [PATCH v2 2/2] vhost: fix physical address mapping xuan.ding @ 2022-02-01 8:51 ` Maxime Coquelin 2022-02-04 10:43 ` Maxime Coquelin 1 sibling, 0 replies; 11+ messages in thread From: Maxime Coquelin @ 2022-02-01 8:51 UTC (permalink / raw) To: xuan.ding, chenbo.xia; +Cc: dev, jiayu.hu, yuanx.wang On 1/19/22 16:10, xuan.ding@intel.com wrote: > From: Xuan Ding <xuan.ding@intel.com> > > When choosing IOVA as PA mode, IOVA is likely to be discontinuous, > which requires page by page mapping for DMA devices. To be consistent, > this patch implements page by page mapping instead of mapping at the > region granularity for both IOVA as VA and PA mode. > > Fixes: 7c61fa08b716 ("vhost: enable IOMMU for async vhost") > > Signed-off-by: Xuan Ding <xuan.ding@intel.com> > Signed-off-by: Yuan Wang <yuanx.wang@intel.com> > --- > lib/vhost/vhost.h | 1 + > lib/vhost/vhost_user.c | 116 ++++++++++++++++++++--------------------- > 2 files changed, 57 insertions(+), 60 deletions(-) > > diff --git a/lib/vhost/vhost.h b/lib/vhost/vhost.h > index ca7f58039d..9521ae56da 100644 > --- a/lib/vhost/vhost.h > +++ b/lib/vhost/vhost.h > @@ -355,6 +355,7 @@ struct vring_packed_desc_event { > struct guest_page { > uint64_t guest_phys_addr; > uint64_t host_iova; > + uint64_t host_user_addr; > uint64_t size; > }; > > diff --git a/lib/vhost/vhost_user.c b/lib/vhost/vhost_user.c > index 95c9df697e..48c08716ba 100644 > --- a/lib/vhost/vhost_user.c > +++ b/lib/vhost/vhost_user.c > @@ -143,57 +143,56 @@ get_blk_size(int fd) > return ret == -1 ? (uint64_t)-1 : (uint64_t)stat.st_blksize; > } > > -static int > -async_dma_map(struct rte_vhost_mem_region *region, bool do_map) > +static void > +async_dma_map(struct virtio_net *dev, bool do_map) > { > - uint64_t host_iova; > int ret = 0; > - > - host_iova = rte_mem_virt2iova((void *)(uintptr_t)region->host_user_addr); > + uint32_t i; > + struct guest_page *page; Add new line. 
> if (do_map) { > - /* Add mapped region into the default container of DPDK. */ > - ret = rte_vfio_container_dma_map(RTE_VFIO_DEFAULT_CONTAINER_FD, > - region->host_user_addr, > - host_iova, > - region->size); > - if (ret) { > - /* > - * DMA device may bind with kernel driver, in this case, > - * we don't need to program IOMMU manually. However, if no > - * device is bound with vfio/uio in DPDK, and vfio kernel > - * module is loaded, the API will still be called and return > - * with ENODEV/ENOSUP. > - * > - * DPDK vfio only returns ENODEV/ENOSUP in very similar > - * situations(vfio either unsupported, or supported > - * but no devices found). Either way, no mappings could be > - * performed. We treat it as normal case in async path. > - */ > - if (rte_errno == ENODEV || rte_errno == ENOTSUP) > - return 0; > - > - VHOST_LOG_CONFIG(ERR, "DMA engine map failed\n"); > - /* DMA mapping errors won't stop VHST_USER_SET_MEM_TABLE. */ > - return 0; > + for (i = 0; i < dev->nr_guest_pages; i++) { > + page = &dev->guest_pages[i]; > + ret = rte_vfio_container_dma_map(RTE_VFIO_DEFAULT_CONTAINER_FD, > + page->host_user_addr, > + page->host_iova, > + page->size); > + if (ret) { > + /* > + * DMA device may bind with kernel driver, in this case, > + * we don't need to program IOMMU manually. However, if no > + * device is bound with vfio/uio in DPDK, and vfio kernel > + * module is loaded, the API will still be called and return > + * with ENODEV. > + * > + * DPDK vfio only returns ENODEV in very similar situations > + * (vfio either unsupported, or supported but no devices found). > + * Either way, no mappings could be performed. We treat it as > + * normal case in async path. This is a workaround. > + */ > + if (rte_errno == ENODEV) > + return; > + > + /* DMA mapping errors won't stop VHOST_USER_SET_MEM_TABLE. */ > + VHOST_LOG_CONFIG(ERR, "DMA engine map failed\n"); > + } > } > > } else { > - /* Remove mapped region from the default container of DPDK. 
*/ > - ret = rte_vfio_container_dma_unmap(RTE_VFIO_DEFAULT_CONTAINER_FD, > - region->host_user_addr, > - host_iova, > - region->size); > - if (ret) { > - /* like DMA map, ignore the kernel driver case when unmap. */ > - if (rte_errno == EINVAL) > - return 0; > - > - VHOST_LOG_CONFIG(ERR, "DMA engine unmap failed\n"); > - return ret; > + for (i = 0; i < dev->nr_guest_pages; i++) { > + page = &dev->guest_pages[i]; > + ret = rte_vfio_container_dma_unmap(RTE_VFIO_DEFAULT_CONTAINER_FD, > + page->host_user_addr, > + page->host_iova, > + page->size); > + if (ret) { > + /* like DMA map, ignore the kernel driver case when unmap. */ > + if (rte_errno == EINVAL) > + return; > + > + VHOST_LOG_CONFIG(ERR, "DMA engine unmap failed\n"); > + } > } > } > - > - return ret; > } > > static void > @@ -205,12 +204,12 @@ free_mem_region(struct virtio_net *dev) > if (!dev || !dev->mem) > return; > > + if (dev->async_copy && rte_vfio_is_enabled("vfio")) > + async_dma_map(dev, false); > + > for (i = 0; i < dev->mem->nregions; i++) { > reg = &dev->mem->regions[i]; > if (reg->host_user_addr) { > - if (dev->async_copy && rte_vfio_is_enabled("vfio")) > - async_dma_map(reg, false); > - > munmap(reg->mmap_addr, reg->mmap_size); > close(reg->fd); > } > @@ -978,7 +977,7 @@ vhost_user_set_vring_base(struct virtio_net **pdev, > > static int > add_one_guest_page(struct virtio_net *dev, uint64_t guest_phys_addr, > - uint64_t host_iova, uint64_t size) > + uint64_t host_iova, uint64_t host_user_addr, uint64_t size) > { > struct guest_page *page, *last_page; > struct guest_page *old_pages; > @@ -999,8 +998,9 @@ add_one_guest_page(struct virtio_net *dev, uint64_t guest_phys_addr, > if (dev->nr_guest_pages > 0) { > last_page = &dev->guest_pages[dev->nr_guest_pages - 1]; > /* merge if the two pages are continuous */ > - if (host_iova == last_page->host_iova + > - last_page->size) { > + if (host_iova == last_page->host_iova + last_page->size > + && guest_phys_addr == last_page->guest_phys_addr + 
last_page->size > + && host_user_addr == last_page->host_user_addr + last_page->size) { Indentation looks wrong. > last_page->size += size; > return 0; > } > @@ -1009,6 +1009,7 @@ add_one_guest_page(struct virtio_net *dev, uint64_t guest_phys_addr, > page = &dev->guest_pages[dev->nr_guest_pages++]; > page->guest_phys_addr = guest_phys_addr; > page->host_iova = host_iova; > + page->host_user_addr = host_user_addr; > page->size = size; > > return 0; > @@ -1028,7 +1029,8 @@ add_guest_pages(struct virtio_net *dev, struct rte_vhost_mem_region *reg, > size = page_size - (guest_phys_addr & (page_size - 1)); > size = RTE_MIN(size, reg_size); > > - if (add_one_guest_page(dev, guest_phys_addr, host_iova, size) < 0) > + if (add_one_guest_page(dev, guest_phys_addr, host_iova, > + host_user_addr, size) < 0) > return -1; > > host_user_addr += size; > @@ -1040,7 +1042,7 @@ add_guest_pages(struct virtio_net *dev, struct rte_vhost_mem_region *reg, > host_iova = rte_mem_virt2iova((void *)(uintptr_t) > host_user_addr); > if (add_one_guest_page(dev, guest_phys_addr, host_iova, > - size) < 0) > + host_user_addr, size) < 0) > return -1; > > host_user_addr += size; > @@ -1215,7 +1217,6 @@ vhost_user_mmap_region(struct virtio_net *dev, > uint64_t mmap_size; > uint64_t alignment; > int populate; > - int ret; > > /* Check for memory_size + mmap_offset overflow */ > if (mmap_offset >= -region->size) { > @@ -1274,14 +1275,6 @@ vhost_user_mmap_region(struct virtio_net *dev, > VHOST_LOG_CONFIG(ERR, "adding guest pages to region failed.\n"); > return -1; > } > - > - if (rte_vfio_is_enabled("vfio")) { > - ret = async_dma_map(region, true); > - if (ret) { > - VHOST_LOG_CONFIG(ERR, "Configure IOMMU for DMA engine failed\n"); > - return -1; > - } > - } > } > > VHOST_LOG_CONFIG(INFO, > @@ -1420,6 +1413,9 @@ vhost_user_set_mem_table(struct virtio_net **pdev, struct VhostUserMsg *msg, > dev->mem->nregions++; > } > > + if (dev->async_copy && rte_vfio_is_enabled("vfio")) > + async_dma_map(dev, true); > + 
> if (vhost_user_postcopy_register(dev, main_fd, msg) < 0) > goto free_mem_table; > Overall, the patch looks good. Please fix the small nits & add Fixes tag and cc stable on patch 1. Thanks, Maxime ^ permalink raw reply [flat|nested] 11+ messages in thread
* Re: [PATCH v2 2/2] vhost: fix physical address mapping
From: Maxime Coquelin @ 2022-02-04 10:43 UTC
To: xuan.ding, chenbo.xia; +Cc: dev, jiayu.hu, yuanx.wang

On 1/19/22 16:10, xuan.ding@intel.com wrote:
> From: Xuan Ding <xuan.ding@intel.com>
>
> When choosing IOVA as PA mode, IOVA is likely to be discontinuous,
> which requires page by page mapping for DMA devices. To be consistent,
> this patch implements page by page mapping instead of mapping at the
> region granularity for both IOVA as VA and PA mode.
>
> Fixes: 7c61fa08b716 ("vhost: enable IOMMU for async vhost")
>
> Signed-off-by: Xuan Ding <xuan.ding@intel.com>
> Signed-off-by: Yuan Wang <yuanx.wang@intel.com>
> ---
>   lib/vhost/vhost.h      |   1 +
>   lib/vhost/vhost_user.c | 116 ++++++++++++++++++++---------------------
>   2 files changed, 57 insertions(+), 60 deletions(-)
>

Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>

Thanks,
Maxime
* Re: [PATCH v2 0/2] vhost: fix async address mapping
From: Maxime Coquelin @ 2022-02-01  8:28 UTC
To: xuan.ding, chenbo.xia; +Cc: dev, jiayu.hu, yuanx.wang

On 1/19/22 16:10, xuan.ding@intel.com wrote:
> From: Xuan Ding <xuan.ding@intel.com>
>
> This patchset fixes the issue of incorrect DMA mapping in PA mode.
> Due to the ambiguity of host_phys_addr naming in the guest page
> struct, rename it to host_iova.
>
> v2:
> * Change the order of patch.

I'm not sure why you changed the order of the patches.
Now, the second one is the fix, so it will make the backport more
difficult. Either both are considered to be fixes. I think it can make
sense as the renaming does not introduce risk of regression and will
make backporting patches easier in the future.

Other solution is to reverse the order again, but I think tagging the
renaming as a fix is OK for me here.

What do you think?

Regards,
Maxime

>
> Xuan Ding (2):
>   vhost: rename field in guest page struct
>   vhost: fix physical address mapping
>
>   lib/vhost/vhost.h      |  11 ++--
>   lib/vhost/vhost_user.c | 130 ++++++++++++++++++++---------------------
>   lib/vhost/virtio_net.c |  11 ++--
>   3 files changed, 75 insertions(+), 77 deletions(-)
>
* Re: [PATCH v2 0/2] vhost: fix async address mapping
From: Kevin Traynor @ 2022-02-01 11:29 UTC
To: Maxime Coquelin, xuan.ding, chenbo.xia; +Cc: dev, jiayu.hu, yuanx.wang

On 01/02/2022 08:28, Maxime Coquelin wrote:
> On 1/19/22 16:10, xuan.ding@intel.com wrote:
>> From: Xuan Ding <xuan.ding@intel.com>
>>
>> This patchset fixes the issue of incorrect DMA mapping in PA mode.
>> Due to the ambiguity of host_phys_addr naming in the guest page
>> struct, rename it to host_iova.
>>
>> v2:
>> * Change the order of patch.
>
> I'm not sure why you changed the order of the patches.
> Now, the second one is the fix, so it will make the backport more
> difficult. Either both are considered to be fixes. I think it can make
> sense as the renaming does not introduce risk of regression and will
> make backporting patches easier in the future.
>
> Other solution is to reverse the order again, but I think tagging the
> renaming as a fix is OK for me here.
>
> What do you think?
>

Either way sounds ok to me, but can you also add stable tag(s). There
isn't a stable tag on either patch at present. Thanks.

> Regards,
> Maxime
>
>>
>> Xuan Ding (2):
>>   vhost: rename field in guest page struct
>>   vhost: fix physical address mapping
>>
>>   lib/vhost/vhost.h      |  11 ++--
>>   lib/vhost/vhost_user.c | 130 ++++++++++++++++++++---------------------
>>   lib/vhost/virtio_net.c |  11 ++--
>>   3 files changed, 75 insertions(+), 77 deletions(-)
>>
>
* Re: [PATCH v2 0/2] vhost: fix async address mapping
From: Maxime Coquelin @ 2022-02-04 10:43 UTC
To: Kevin Traynor, xuan.ding, chenbo.xia; +Cc: dev, jiayu.hu, yuanx.wang

On 2/1/22 12:29, Kevin Traynor wrote:
> On 01/02/2022 08:28, Maxime Coquelin wrote:
>> On 1/19/22 16:10, xuan.ding@intel.com wrote:
>>> From: Xuan Ding <xuan.ding@intel.com>
>>>
>>> This patchset fixes the issue of incorrect DMA mapping in PA mode.
>>> Due to the ambiguity of host_phys_addr naming in the guest page
>>> struct, rename it to host_iova.
>>>
>>> v2:
>>> * Change the order of patch.
>>
>> I'm not sure why you changed the order of the patches.
>> Now, the second one is the fix, so it will make the backport more
>> difficult. Either both are considered to be fixes. I think it can make
>> sense as the renaming does not introduce risk of regression and will
>> make backporting patches easier in the future.
>>
>> Other solution is to reverse the order again, but I think tagging the
>> renaming as a fix is OK for me here.
>>
>> What do you think?
>>
>
> Either way sounds ok to me, but can you also add stable tag(s). There
> isn't a stable tag on either patch at present. Thanks.

OK, will do.

Thanks Kevin,
Maxime

>> Regards,
>> Maxime
>>
>>>
>>> Xuan Ding (2):
>>>   vhost: rename field in guest page struct
>>>   vhost: fix physical address mapping
>>>
>>>   lib/vhost/vhost.h      |  11 ++--
>>>   lib/vhost/vhost_user.c | 130 ++++++++++++++++++++---------------------
>>>   lib/vhost/virtio_net.c |  11 ++--
>>>   3 files changed, 75 insertions(+), 77 deletions(-)
>>>
>>
>
* Re: [PATCH v2 0/2] vhost: fix async address mapping
From: Maxime Coquelin @ 2022-02-04 10:56 UTC
To: xuan.ding, chenbo.xia; +Cc: dev, jiayu.hu, yuanx.wang

On 1/19/22 16:10, xuan.ding@intel.com wrote:
> From: Xuan Ding <xuan.ding@intel.com>
>
> This patchset fixes the issue of incorrect DMA mapping in PA mode.
> Due to the ambiguity of host_phys_addr naming in the guest page
> struct, rename it to host_iova.
>
> v2:
> * Change the order of patch.
>
> Xuan Ding (2):
>   vhost: rename field in guest page struct
>   vhost: fix physical address mapping
>
>   lib/vhost/vhost.h      |  11 ++--
>   lib/vhost/vhost_user.c | 130 ++++++++++++++++++++---------------------
>   lib/vhost/virtio_net.c |  11 ++--
>   3 files changed, 75 insertions(+), 77 deletions(-)
>

I was willing to apply the series, but it does not apply.
Could you please rebase with taking our comments into account?

Thanks,
Maxime
* RE: [PATCH v2 0/2] vhost: fix async address mapping
From: Ding, Xuan @ 2022-02-11  2:48 UTC
To: Maxime Coquelin, Xia, Chenbo; +Cc: dev, Hu, Jiayu, Wang, YuanX, Kevin Traynor

Hi Maxime & Kevin,

> -----Original Message-----
> From: Maxime Coquelin <maxime.coquelin@redhat.com>
> Sent: February 4, 2022 18:57
> To: Ding, Xuan <xuan.ding@intel.com>; Xia, Chenbo <chenbo.xia@intel.com>
> Cc: dev@dpdk.org; Hu, Jiayu <jiayu.hu@intel.com>; Wang, YuanX
> <yuanx.wang@intel.com>
> Subject: Re: [PATCH v2 0/2] vhost: fix async address mapping
>
> On 1/19/22 16:10, xuan.ding@intel.com wrote:
> > From: Xuan Ding <xuan.ding@intel.com>
> >
> > This patchset fixes the issue of incorrect DMA mapping in PA mode.
> > Due to the ambiguity of host_phys_addr naming in the guest page
> > struct, rename it to host_iova.
> >
> > v2:
> > * Change the order of patch.

The consideration of changing the order here is to avoid the fix patch
using the previous variable name (host_phys_addr), so rename the
variable first.

> >
> > Xuan Ding (2):
> >   vhost: rename field in guest page struct
> >   vhost: fix physical address mapping
> >
> >   lib/vhost/vhost.h      |  11 ++--
> >   lib/vhost/vhost_user.c | 130 ++++++++++++++++++++---------------------
> >   lib/vhost/virtio_net.c |  11 ++--
> >   3 files changed, 75 insertions(+), 77 deletions(-)
> >
>
> I was willing to apply the series, but it does not apply.
> Could you please rebase with taking our comments into account?

Thanks for your comments, I will send a new patch set applied with
your comments.

Regards,
Xuan

>
> Thanks,
> Maxime