From: Maxime Coquelin <maxime.coquelin@redhat.com>
To: dev@dpdk.org, remy.horton@intel.com, tiwei.bie@intel.com, yliu@fridaylinux.org
Cc: mst@redhat.com, jfreiman@redhat.com, vkaplans@redhat.com, jasowang@redhat.com,
 Maxime Coquelin <maxime.coquelin@redhat.com>
Date: Thu, 5 Oct 2017 10:36:20 +0200
Message-Id: <20171005083627.27828-13-maxime.coquelin@redhat.com>
In-Reply-To: <20171005083627.27828-1-maxime.coquelin@redhat.com>
References: <20171005083627.27828-1-maxime.coquelin@redhat.com>
Subject: [dpdk-dev] [PATCH v3 12/19] vhost: use the guest IOVA to host VA helper

Replace rte_vhost_gpa_to_vva() calls with vhost_iova_to_vva(), which
also requires passing the mapped length and the needed access
permissions.

Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
---
 lib/librte_vhost/virtio_net.c | 71 +++++++++++++++++++++++++++++++++++--------
 1 file changed, 58 insertions(+), 13 deletions(-)

diff --git a/lib/librte_vhost/virtio_net.c b/lib/librte_vhost/virtio_net.c
index 59ff6c875..cdfb6f957 100644
--- a/lib/librte_vhost/virtio_net.c
+++ b/lib/librte_vhost/virtio_net.c
@@ -45,6 +45,7 @@
 #include <...>
 #include <...>
 
+#include "iotlb.h"
 #include "vhost.h"
 
 #define MAX_PKT_BURST 32
@@ -211,7 +212,8 @@ copy_mbuf_to_desc(struct virtio_net *dev, struct vhost_virtqueue *vq,
 	int error = 0;
 
 	desc = &descs[desc_idx];
-	desc_addr = rte_vhost_gpa_to_vva(dev->mem, desc->addr);
+	desc_addr = vhost_iova_to_vva(dev, vq, desc->addr,
+					desc->len, VHOST_ACCESS_RW);
 	/*
 	 * Checking of 'desc_addr' placed outside of 'unlikely' macro to avoid
 	 * performance issue with some versions of gcc (4.8.4 and 5.3.0) which
@@ -255,7 +257,9 @@ copy_mbuf_to_desc(struct virtio_net *dev, struct vhost_virtqueue *vq,
 		}
 
 		desc = &descs[desc->next];
-		desc_addr = rte_vhost_gpa_to_vva(dev->mem, desc->addr);
+		desc_addr = vhost_iova_to_vva(dev, vq, desc->addr,
+							desc->len,
+							VHOST_ACCESS_RW);
 		if (unlikely(!desc_addr)) {
 			error = -1;
 			goto out;
@@ -352,14 +356,20 @@ virtio_dev_rx(struct virtio_net *dev, uint16_t queue_id,
 	}
 
 	rte_prefetch0(&vq->desc[desc_indexes[0]]);
+
+	if (dev->features & (1ULL << VIRTIO_F_IOMMU_PLATFORM))
+		vhost_user_iotlb_rd_lock(vq);
+
 	for (i = 0; i < count; i++) {
 		uint16_t desc_idx = desc_indexes[i];
 		int err;
 
 		if (vq->desc[desc_idx].flags & VRING_DESC_F_INDIRECT) {
 			descs = (struct vring_desc *)(uintptr_t)
-				rte_vhost_gpa_to_vva(dev->mem,
-					vq->desc[desc_idx].addr);
+				vhost_iova_to_vva(dev,
+						vq, vq->desc[desc_idx].addr,
+						vq->desc[desc_idx].len,
+						VHOST_ACCESS_RO);
 			if (unlikely(!descs)) {
 				count = i;
 				break;
@@ -384,6 +394,9 @@ virtio_dev_rx(struct virtio_net *dev, uint16_t queue_id,
 
 	do_data_copy_enqueue(dev, vq);
 
+	if (dev->features & (1ULL << VIRTIO_F_IOMMU_PLATFORM))
+		vhost_user_iotlb_rd_unlock(vq);
+
 	rte_smp_wmb();
 
 	*(volatile uint16_t *)&vq->used->idx += count;
@@ -417,7 +430,9 @@ fill_vec_buf(struct virtio_net *dev, struct vhost_virtqueue *vq,
 
 	if (vq->desc[idx].flags & VRING_DESC_F_INDIRECT) {
 		descs = (struct vring_desc *)(uintptr_t)
-			rte_vhost_gpa_to_vva(dev->mem, vq->desc[idx].addr);
+			vhost_iova_to_vva(dev, vq, vq->desc[idx].addr,
+						vq->desc[idx].len,
+						VHOST_ACCESS_RO);
 		if (unlikely(!descs))
 			return -1;
 
@@ -512,7 +527,9 @@ copy_mbuf_to_desc_mergeable(struct virtio_net *dev, struct vhost_virtqueue *vq,
 		goto out;
 	}
 
-	desc_addr = rte_vhost_gpa_to_vva(dev->mem, buf_vec[vec_idx].buf_addr);
+	desc_addr = vhost_iova_to_vva(dev, vq, buf_vec[vec_idx].buf_addr,
+					buf_vec[vec_idx].buf_len,
+					VHOST_ACCESS_RW);
 	if (buf_vec[vec_idx].buf_len < dev->vhost_hlen || !desc_addr) {
 		error = -1;
 		goto out;
@@ -535,8 +552,11 @@ copy_mbuf_to_desc_mergeable(struct virtio_net *dev, struct vhost_virtqueue *vq,
 		/* done with current desc buf, get the next one */
 		if (desc_avail == 0) {
 			vec_idx++;
-			desc_addr = rte_vhost_gpa_to_vva(dev->mem,
-					buf_vec[vec_idx].buf_addr);
+			desc_addr =
+				vhost_iova_to_vva(dev, vq,
+					buf_vec[vec_idx].buf_addr,
+					buf_vec[vec_idx].buf_len,
+					VHOST_ACCESS_RW);
 			if (unlikely(!desc_addr)) {
 				error = -1;
 				goto out;
@@ -637,6 +657,10 @@ virtio_dev_merge_rx(struct virtio_net *dev, uint16_t queue_id,
 	vq->shadow_used_idx = 0;
 
 	avail_head = *((volatile uint16_t *)&vq->avail->idx);
+
+	if (dev->features & (1ULL << VIRTIO_F_IOMMU_PLATFORM))
+		vhost_user_iotlb_rd_lock(vq);
+
 	for (pkt_idx = 0; pkt_idx < count; pkt_idx++) {
 		uint32_t pkt_len = pkts[pkt_idx]->pkt_len + dev->vhost_hlen;
 
@@ -665,6 +689,9 @@ virtio_dev_merge_rx(struct virtio_net *dev, uint16_t queue_id,
 
 	do_data_copy_enqueue(dev, vq);
 
+	if (dev->features & (1ULL << VIRTIO_F_IOMMU_PLATFORM))
+		vhost_user_iotlb_rd_unlock(vq);
+
 	if (likely(vq->shadow_used_idx)) {
 		flush_shadow_used_ring(dev, vq);
 
@@ -875,7 +902,10 @@ copy_desc_to_mbuf(struct virtio_net *dev, struct vhost_virtqueue *vq,
 		goto out;
 	}
 
-	desc_addr = rte_vhost_gpa_to_vva(dev->mem, desc->addr);
+	desc_addr = vhost_iova_to_vva(dev,
+					vq, desc->addr,
+					desc->len,
+					VHOST_ACCESS_RO);
 	if (unlikely(!desc_addr)) {
 		error = -1;
 		goto out;
@@ -899,7 +929,10 @@ copy_desc_to_mbuf(struct virtio_net *dev, struct vhost_virtqueue *vq,
 			goto out;
 		}
 
-		desc_addr = rte_vhost_gpa_to_vva(dev->mem, desc->addr);
+		desc_addr = vhost_iova_to_vva(dev,
+						vq, desc->addr,
+						desc->len,
+						VHOST_ACCESS_RO);
 		if (unlikely(!desc_addr)) {
 			error = -1;
 			goto out;
@@ -982,7 +1015,10 @@ copy_desc_to_mbuf(struct virtio_net *dev, struct vhost_virtqueue *vq,
 			goto out;
 		}
 
-		desc_addr = rte_vhost_gpa_to_vva(dev->mem, desc->addr);
+		desc_addr = vhost_iova_to_vva(dev,
+						vq, desc->addr,
+						desc->len,
+						VHOST_ACCESS_RO);
 		if (unlikely(!desc_addr)) {
 			error = -1;
 			goto out;
@@ -1226,6 +1262,10 @@ rte_vhost_dequeue_burst(int vid, uint16_t queue_id,
 
 	/* Prefetch descriptor index. */
 	rte_prefetch0(&vq->desc[desc_indexes[0]]);
+
+	if (dev->features & (1ULL << VIRTIO_F_IOMMU_PLATFORM))
+		vhost_user_iotlb_rd_lock(vq);
+
 	for (i = 0; i < count; i++) {
 		struct vring_desc *desc;
 		uint16_t sz, idx;
@@ -1236,8 +1276,10 @@ rte_vhost_dequeue_burst(int vid, uint16_t queue_id,
 
 		if (vq->desc[desc_indexes[i]].flags & VRING_DESC_F_INDIRECT) {
 			desc = (struct vring_desc *)(uintptr_t)
-				rte_vhost_gpa_to_vva(dev->mem,
-					vq->desc[desc_indexes[i]].addr);
+				vhost_iova_to_vva(dev, vq,
+						vq->desc[desc_indexes[i]].addr,
+						sizeof(*desc),
+						VHOST_ACCESS_RO);
 			if (unlikely(!desc))
 				break;
 
@@ -1287,6 +1329,9 @@ rte_vhost_dequeue_burst(int vid, uint16_t queue_id,
 			TAILQ_INSERT_TAIL(&vq->zmbuf_list, zmbuf, next);
 		}
 	}
+	if (dev->features & (1ULL << VIRTIO_F_IOMMU_PLATFORM))
+		vhost_user_iotlb_rd_unlock(vq);
+
 	vq->last_avail_idx += i;
 
 	if (likely(dev->dequeue_zero_copy == 0)) {
-- 
2.13.6
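The key difference the patch relies on: unlike a plain GPA-to-VVA lookup, the new helper must check that the whole requested range is covered by one mapping and that the mapping grants the requested access, returning 0 on failure so callers can bail out. The sketch below is a simplified illustration of that contract, not DPDK's actual implementation; the `toy_` types, the permission macros, and the linear scan are all assumptions made for the example.

```c
#include <stddef.h>
#include <stdint.h>

#define TOY_ACCESS_RO 0x1 /* read permission bit (illustrative) */
#define TOY_ACCESS_RW 0x3 /* read + write permission bits (illustrative) */

/* One cached IOVA -> host-VA mapping, as an IOTLB entry might hold it. */
struct toy_iotlb_entry {
	uint64_t iova;    /* start of the guest I/O virtual address range */
	uint64_t size;    /* length of the mapped range in bytes */
	uint64_t host_va; /* host virtual address the range maps to */
	uint8_t  perm;    /* permission bits granted by the mapping */
};

/*
 * Translate a guest IOVA to a host VA, checking that the whole
 * [iova, iova + len) range lies inside a single entry and that the
 * requested permissions are granted. Returns 0 on a miss, which the
 * caller treats as an error (as the patch does when desc_addr is 0).
 */
static uint64_t
toy_iova_to_vva(const struct toy_iotlb_entry *tlb, size_t n,
		uint64_t iova, uint64_t len, uint8_t perm)
{
	for (size_t i = 0; i < n; i++) {
		const struct toy_iotlb_entry *e = &tlb[i];

		if (iova >= e->iova &&
		    iova + len <= e->iova + e->size &&
		    (e->perm & perm) == perm)
			return e->host_va + (iova - e->iova);
	}
	return 0; /* miss: no covering mapping with sufficient permission */
}
```

This is why every converted call site in the diff now passes a length (`desc->len`, `buf_len`, or `sizeof(*desc)` for an indirect table) and an access flag: a descriptor that straddles two mappings, or a write through a read-only mapping, must fail the translation rather than silently succeed.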
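The other pattern the diff introduces is bracketing each enqueue/dequeue loop with `vhost_user_iotlb_rd_lock()`/`vhost_user_iotlb_rd_unlock()`, guarded by the `VIRTIO_F_IOMMU_PLATFORM` feature bit so the legacy path (no vIOMMU, GPA used directly as IOVA) pays no locking cost. A minimal sketch of that guard, using a POSIX rwlock as a stand-in for the vhost IOTLB lock; `toy_vq`, `toy_process_ring`, and the callback are hypothetical names for illustration (the virtio spec defines `VIRTIO_F_IOMMU_PLATFORM` as feature bit 33):

```c
#include <pthread.h>
#include <stdint.h>

#define TOY_F_IOMMU_PLATFORM 33 /* VIRTIO_F_IOMMU_PLATFORM feature bit number */

struct toy_vq {
	pthread_rwlock_t iotlb_lock; /* protects cached IOVA->VA translations */
	/* ... descriptor ring, IOTLB cache, etc. ... */
};

/*
 * Mirror of the pattern the patch adds around the burst loops: take the
 * IOTLB read lock only when an IOMMU was negotiated, so translations
 * stay stable for the whole burst while invalidations (writers) wait.
 */
static void
toy_process_ring(struct toy_vq *vq, uint64_t features,
		 void (*process)(struct toy_vq *))
{
	int has_iommu = (features & (1ULL << TOY_F_IOMMU_PLATFORM)) != 0;

	if (has_iommu)
		pthread_rwlock_rdlock(&vq->iotlb_lock);

	process(vq); /* every translation in here sees a consistent IOTLB */

	if (has_iommu)
		pthread_rwlock_unlock(&vq->iotlb_lock);
}
```

A read lock (rather than a mutex) is the natural choice here because many datapath threads may translate concurrently; only an IOTLB invalidation from the guest needs exclusive access.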