DPDK patches and discussions
* [dpdk-dev] [PATCH 00/12] Vhost: CVE-2018-1059 fixes
@ 2018-04-23 15:58 Maxime Coquelin
  2018-04-23 15:58 ` [dpdk-dev] [PATCH 01/12] vhost: fix indirect descriptors table translation size Maxime Coquelin
                   ` (12 more replies)
  0 siblings, 13 replies; 18+ messages in thread
From: Maxime Coquelin @ 2018-04-23 15:58 UTC (permalink / raw)
  To: dev; +Cc: Maxime Coquelin

This series fixes the security vulnerability referenced
as CVE-2018-1059.

Patches are already applied to the branch, but reviews are
still encouraged. Any issues spotted will be fixed on top.

Maxime Coquelin (12):
  vhost: fix indirect descriptors table translation size
  vhost: check all range is mapped when translating GPAs
  vhost: introduce safe API for GPA translation
  vhost: ensure all range is mapped when translating QVAs
  vhost: add support for non-contiguous indirect descs tables
  vhost: handle virtually non-contiguous buffers in Tx
  vhost: handle virtually non-contiguous buffers in Rx
  vhost: handle virtually non-contiguous buffers in Rx-mrg
  examples/vhost: move to safe GPA translation API
  examples/vhost_scsi: move to safe GPA translation API
  vhost/crypto: move to safe GPA translation API
  vhost: deprecate unsafe GPA translation API

 examples/vhost/virtio_net.c            |  94 +++++++-
 examples/vhost_scsi/vhost_scsi.c       |  56 ++++-
 lib/librte_vhost/rte_vhost.h           |  46 ++++
 lib/librte_vhost/rte_vhost_version.map |   4 +-
 lib/librte_vhost/vhost.c               |  39 ++--
 lib/librte_vhost/vhost.h               |   8 +-
 lib/librte_vhost/vhost_crypto.c        |  65 ++++--
 lib/librte_vhost/vhost_user.c          |  58 +++--
 lib/librte_vhost/virtio_net.c          | 411 ++++++++++++++++++++++++++++-----
 9 files changed, 650 insertions(+), 131 deletions(-)

-- 
2.14.3


* [dpdk-dev] [PATCH 01/12] vhost: fix indirect descriptors table translation size
  2018-04-23 15:58 [dpdk-dev] [PATCH 00/12] Vhost: CVE-2018-1059 fixes Maxime Coquelin
@ 2018-04-23 15:58 ` Maxime Coquelin
  2018-04-23 15:58 ` [dpdk-dev] [PATCH 02/12] vhost: check all range is mapped when translating GPAs Maxime Coquelin
                   ` (11 subsequent siblings)
  12 siblings, 0 replies; 18+ messages in thread
From: Maxime Coquelin @ 2018-04-23 15:58 UTC (permalink / raw)
  To: dev; +Cc: Maxime Coquelin

This patch fixes the size passed at indirect descriptor table
translation time: it must be the len field of the descriptor,
not the size of a single descriptor.

This issue has been assigned CVE-2018-1059.

Fixes: 62fdb8255ae7 ("vhost: use the guest IOVA to host VA helper")

Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
---
 lib/librte_vhost/virtio_net.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/lib/librte_vhost/virtio_net.c b/lib/librte_vhost/virtio_net.c
index ed7198dbb..108f4deff 100644
--- a/lib/librte_vhost/virtio_net.c
+++ b/lib/librte_vhost/virtio_net.c
@@ -1261,7 +1261,7 @@ rte_vhost_dequeue_burst(int vid, uint16_t queue_id,
 			desc = (struct vring_desc *)(uintptr_t)
 				vhost_iova_to_vva(dev, vq,
 						vq->desc[desc_indexes[i]].addr,
-						sizeof(*desc),
+						vq->desc[desc_indexes[i]].len,
 						VHOST_ACCESS_RO);
 			if (unlikely(!desc))
 				break;
-- 
2.14.3


* [dpdk-dev] [PATCH 02/12] vhost: check all range is mapped when translating GPAs
  2018-04-23 15:58 [dpdk-dev] [PATCH 00/12] Vhost: CVE-2018-1059 fixes Maxime Coquelin
  2018-04-23 15:58 ` [dpdk-dev] [PATCH 01/12] vhost: fix indirect descriptors table translation size Maxime Coquelin
@ 2018-04-23 15:58 ` Maxime Coquelin
  2018-04-23 15:58 ` [dpdk-dev] [PATCH 03/12] vhost: introduce safe API for GPA translation Maxime Coquelin
                   ` (10 subsequent siblings)
  12 siblings, 0 replies; 18+ messages in thread
From: Maxime Coquelin @ 2018-04-23 15:58 UTC (permalink / raw)
  To: dev; +Cc: Maxime Coquelin

There is currently no check done on the length when translating
guest addresses into host virtual addresses. Also, there is no
guarantee that the guest address range is contiguous in
the host virtual address space.

This patch prepares vhost_iova_to_vva() and its callers to
return and check the mapped size. If the mapped size is smaller
than the requested size, the caller handles it as an error.
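
The caller-side pattern this introduces can be sketched in isolation as
below. The helper and the single-region layout are mock stand-ins for
vhost_iova_to_vva() and the IOTLB, used only to illustrate the in/out
size contract; they are not the DPDK implementation.

```c
#include <assert.h>
#include <stdint.h>

/*
 * Mock translation helper: pretends guest range [0x1000, 0x3000) is
 * mapped contiguously. On return, *size holds the length actually
 * mapped, which may be smaller than requested.
 */
static uint64_t
mock_iova_to_vva(uint64_t iova, uint64_t *size)
{
	const uint64_t start = 0x1000, end = 0x3000;

	if (iova < start || iova >= end) {
		*size = 0;
		return 0;
	}
	if (*size > end - iova)
		*size = end - iova; /* clamp to the contiguous part */

	return iova + 0x100000; /* fake host virtual address */
}

/* Caller pattern after this patch: a short mapping is an error. */
static int
translate_checked(uint64_t iova, uint64_t req_size, uint64_t *vva)
{
	uint64_t size = req_size;

	*vva = mock_iova_to_vva(iova, &size);
	if (!*vva || size != req_size)
		return -1;

	return 0;
}
```

A request that fits entirely inside the mapping succeeds, while one
that crosses its end comes back clamped and is rejected.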

This issue has been assigned CVE-2018-1059.

Reported-by: Yongji Xie <xieyongji@baidu.com>
Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
---
 lib/librte_vhost/vhost.c      | 39 +++++++++++++++-----------
 lib/librte_vhost/vhost.h      |  6 ++--
 lib/librte_vhost/virtio_net.c | 64 +++++++++++++++++++++++++++----------------
 3 files changed, 67 insertions(+), 42 deletions(-)

diff --git a/lib/librte_vhost/vhost.c b/lib/librte_vhost/vhost.c
index 5ddf55ed9..afded4952 100644
--- a/lib/librte_vhost/vhost.c
+++ b/lib/librte_vhost/vhost.c
@@ -29,17 +29,17 @@ struct virtio_net *vhost_devices[MAX_VHOST_DEVICE];
 /* Called with iotlb_lock read-locked */
 uint64_t
 __vhost_iova_to_vva(struct virtio_net *dev, struct vhost_virtqueue *vq,
-		    uint64_t iova, uint64_t size, uint8_t perm)
+		    uint64_t iova, uint64_t *size, uint8_t perm)
 {
 	uint64_t vva, tmp_size;
 
-	if (unlikely(!size))
+	if (unlikely(!*size))
 		return 0;
 
-	tmp_size = size;
+	tmp_size = *size;
 
 	vva = vhost_user_iotlb_cache_find(vq, iova, &tmp_size, perm);
-	if (tmp_size == size)
+	if (tmp_size == *size)
 		return vva;
 
 	iova += tmp_size;
@@ -118,32 +118,39 @@ free_device(struct virtio_net *dev)
 int
 vring_translate(struct virtio_net *dev, struct vhost_virtqueue *vq)
 {
-	uint64_t size;
+	uint64_t req_size, size;
 
 	if (!(dev->features & (1ULL << VIRTIO_F_IOMMU_PLATFORM)))
 		goto out;
 
-	size = sizeof(struct vring_desc) * vq->size;
+	req_size = sizeof(struct vring_desc) * vq->size;
+	size = req_size;
 	vq->desc = (struct vring_desc *)(uintptr_t)vhost_iova_to_vva(dev, vq,
 						vq->ring_addrs.desc_user_addr,
-						size, VHOST_ACCESS_RW);
-	if (!vq->desc)
+						&size, VHOST_ACCESS_RW);
+	if (!vq->desc || size != req_size)
 		return -1;
 
-	size = sizeof(struct vring_avail);
-	size += sizeof(uint16_t) * vq->size;
+	req_size = sizeof(struct vring_avail);
+	req_size += sizeof(uint16_t) * vq->size;
+	if (dev->features & (1ULL << VIRTIO_RING_F_EVENT_IDX))
+		req_size += sizeof(uint16_t);
+	size = req_size;
 	vq->avail = (struct vring_avail *)(uintptr_t)vhost_iova_to_vva(dev, vq,
 						vq->ring_addrs.avail_user_addr,
-						size, VHOST_ACCESS_RW);
-	if (!vq->avail)
+						&size, VHOST_ACCESS_RW);
+	if (!vq->avail || size != req_size)
 		return -1;
 
-	size = sizeof(struct vring_used);
-	size += sizeof(struct vring_used_elem) * vq->size;
+	req_size = sizeof(struct vring_used);
+	req_size += sizeof(struct vring_used_elem) * vq->size;
+	if (dev->features & (1ULL << VIRTIO_RING_F_EVENT_IDX))
+		req_size += sizeof(uint16_t);
+	size = req_size;
 	vq->used = (struct vring_used *)(uintptr_t)vhost_iova_to_vva(dev, vq,
 						vq->ring_addrs.used_user_addr,
-						size, VHOST_ACCESS_RW);
-	if (!vq->used)
+						&size, VHOST_ACCESS_RW);
+	if (!vq->used || size != req_size)
 		return -1;
 
 out:
diff --git a/lib/librte_vhost/vhost.h b/lib/librte_vhost/vhost.h
index c9b64461d..f7dbd2c94 100644
--- a/lib/librte_vhost/vhost.h
+++ b/lib/librte_vhost/vhost.h
@@ -437,18 +437,18 @@ struct vhost_device_ops const *vhost_driver_callback_get(const char *path);
 void vhost_backend_cleanup(struct virtio_net *dev);
 
 uint64_t __vhost_iova_to_vva(struct virtio_net *dev, struct vhost_virtqueue *vq,
-			uint64_t iova, uint64_t size, uint8_t perm);
+			uint64_t iova, uint64_t *len, uint8_t perm);
 int vring_translate(struct virtio_net *dev, struct vhost_virtqueue *vq);
 void vring_invalidate(struct virtio_net *dev, struct vhost_virtqueue *vq);
 
 static __rte_always_inline uint64_t
 vhost_iova_to_vva(struct virtio_net *dev, struct vhost_virtqueue *vq,
-			uint64_t iova, uint64_t size, uint8_t perm)
+			uint64_t iova, uint64_t *len, uint8_t perm)
 {
 	if (!(dev->features & (1ULL << VIRTIO_F_IOMMU_PLATFORM)))
 		return rte_vhost_gpa_to_vva(dev->mem, iova);
 
-	return __vhost_iova_to_vva(dev, vq, iova, size, perm);
+	return __vhost_iova_to_vva(dev, vq, iova, len, perm);
 }
 
 #define vhost_used_event(vr) \
diff --git a/lib/librte_vhost/virtio_net.c b/lib/librte_vhost/virtio_net.c
index 108f4deff..2be3e7a7e 100644
--- a/lib/librte_vhost/virtio_net.c
+++ b/lib/librte_vhost/virtio_net.c
@@ -180,6 +180,7 @@ copy_mbuf_to_desc(struct virtio_net *dev, struct vhost_virtqueue *vq,
 	uint32_t desc_avail, desc_offset;
 	uint32_t mbuf_avail, mbuf_offset;
 	uint32_t cpy_len;
+	uint64_t dlen;
 	struct vring_desc *desc;
 	uint64_t desc_addr;
 	/* A counter to avoid desc dead loop chain */
@@ -189,14 +190,16 @@ copy_mbuf_to_desc(struct virtio_net *dev, struct vhost_virtqueue *vq,
 	int error = 0;
 
 	desc = &descs[desc_idx];
+	dlen = desc->len;
 	desc_addr = vhost_iova_to_vva(dev, vq, desc->addr,
-					desc->len, VHOST_ACCESS_RW);
+					&dlen, VHOST_ACCESS_RW);
 	/*
 	 * Checking of 'desc_addr' placed outside of 'unlikely' macro to avoid
 	 * performance issue with some versions of gcc (4.8.4 and 5.3.0) which
 	 * otherwise stores offset on the stack instead of in a register.
 	 */
-	if (unlikely(desc->len < dev->vhost_hlen) || !desc_addr) {
+	if (unlikely(dlen != desc->len || desc->len < dev->vhost_hlen) ||
+			!desc_addr) {
 		error = -1;
 		goto out;
 	}
@@ -234,10 +237,11 @@ copy_mbuf_to_desc(struct virtio_net *dev, struct vhost_virtqueue *vq,
 			}
 
 			desc = &descs[desc->next];
+			dlen = desc->len;
 			desc_addr = vhost_iova_to_vva(dev, vq, desc->addr,
-							desc->len,
+							&dlen,
 							VHOST_ACCESS_RW);
-			if (unlikely(!desc_addr)) {
+			if (unlikely(!desc_addr || dlen != desc->len)) {
 				error = -1;
 				goto out;
 			}
@@ -351,12 +355,13 @@ virtio_dev_rx(struct virtio_net *dev, uint16_t queue_id,
 		int err;
 
 		if (vq->desc[desc_idx].flags & VRING_DESC_F_INDIRECT) {
+			uint64_t dlen = vq->desc[desc_idx].len;
 			descs = (struct vring_desc *)(uintptr_t)
 				vhost_iova_to_vva(dev,
 						vq, vq->desc[desc_idx].addr,
-						vq->desc[desc_idx].len,
-						VHOST_ACCESS_RO);
-			if (unlikely(!descs)) {
+						&dlen, VHOST_ACCESS_RO);
+			if (unlikely(!descs ||
+					dlen != vq->desc[desc_idx].len)) {
 				count = i;
 				break;
 			}
@@ -408,16 +413,18 @@ fill_vec_buf(struct virtio_net *dev, struct vhost_virtqueue *vq,
 	uint16_t idx = vq->avail->ring[avail_idx & (vq->size - 1)];
 	uint32_t vec_id = *vec_idx;
 	uint32_t len    = 0;
+	uint64_t dlen;
 	struct vring_desc *descs = vq->desc;
 
 	*desc_chain_head = idx;
 
 	if (vq->desc[idx].flags & VRING_DESC_F_INDIRECT) {
+		dlen = vq->desc[idx].len;
 		descs = (struct vring_desc *)(uintptr_t)
 			vhost_iova_to_vva(dev, vq, vq->desc[idx].addr,
-						vq->desc[idx].len,
+						&dlen,
 						VHOST_ACCESS_RO);
-		if (unlikely(!descs))
+		if (unlikely(!descs || dlen != vq->desc[idx].len))
 			return -1;
 
 		idx = 0;
@@ -500,6 +507,7 @@ copy_mbuf_to_desc_mergeable(struct virtio_net *dev, struct vhost_virtqueue *vq,
 	uint32_t mbuf_offset, mbuf_avail;
 	uint32_t desc_offset, desc_avail;
 	uint32_t cpy_len;
+	uint64_t dlen;
 	uint64_t hdr_addr, hdr_phys_addr;
 	struct rte_mbuf *hdr_mbuf;
 	struct batch_copy_elem *batch_copy = vq->batch_copy_elems;
@@ -511,10 +519,12 @@ copy_mbuf_to_desc_mergeable(struct virtio_net *dev, struct vhost_virtqueue *vq,
 		goto out;
 	}
 
+	dlen = buf_vec[vec_idx].buf_len;
 	desc_addr = vhost_iova_to_vva(dev, vq, buf_vec[vec_idx].buf_addr,
-						buf_vec[vec_idx].buf_len,
-						VHOST_ACCESS_RW);
-	if (buf_vec[vec_idx].buf_len < dev->vhost_hlen || !desc_addr) {
+						&dlen, VHOST_ACCESS_RW);
+	if (dlen != buf_vec[vec_idx].buf_len ||
+			buf_vec[vec_idx].buf_len < dev->vhost_hlen ||
+			!desc_addr) {
 		error = -1;
 		goto out;
 	}
@@ -536,12 +546,14 @@ copy_mbuf_to_desc_mergeable(struct virtio_net *dev, struct vhost_virtqueue *vq,
 		/* done with current desc buf, get the next one */
 		if (desc_avail == 0) {
 			vec_idx++;
+			dlen = buf_vec[vec_idx].buf_len;
 			desc_addr =
 				vhost_iova_to_vva(dev, vq,
 					buf_vec[vec_idx].buf_addr,
-					buf_vec[vec_idx].buf_len,
+					&dlen,
 					VHOST_ACCESS_RW);
-			if (unlikely(!desc_addr)) {
+			if (unlikely(!desc_addr ||
+					dlen != buf_vec[vec_idx].buf_len)) {
 				error = -1;
 				goto out;
 			}
@@ -847,6 +859,7 @@ copy_desc_to_mbuf(struct virtio_net *dev, struct vhost_virtqueue *vq,
 	uint32_t desc_avail, desc_offset;
 	uint32_t mbuf_avail, mbuf_offset;
 	uint32_t cpy_len;
+	uint64_t dlen;
 	struct rte_mbuf *cur = m, *prev = m;
 	struct virtio_net_hdr *hdr = NULL;
 	/* A counter to avoid desc dead loop chain */
@@ -862,11 +875,12 @@ copy_desc_to_mbuf(struct virtio_net *dev, struct vhost_virtqueue *vq,
 		goto out;
 	}
 
+	dlen = desc->len;
 	desc_addr = vhost_iova_to_vva(dev,
 					vq, desc->addr,
-					desc->len,
+					&dlen,
 					VHOST_ACCESS_RO);
-	if (unlikely(!desc_addr)) {
+	if (unlikely(!desc_addr || dlen != desc->len)) {
 		error = -1;
 		goto out;
 	}
@@ -889,11 +903,12 @@ copy_desc_to_mbuf(struct virtio_net *dev, struct vhost_virtqueue *vq,
 			goto out;
 		}
 
+		dlen = desc->len;
 		desc_addr = vhost_iova_to_vva(dev,
 							vq, desc->addr,
-							desc->len,
+							&dlen,
 							VHOST_ACCESS_RO);
-		if (unlikely(!desc_addr)) {
+		if (unlikely(!desc_addr || dlen != desc->len)) {
 			error = -1;
 			goto out;
 		}
@@ -977,11 +992,11 @@ copy_desc_to_mbuf(struct virtio_net *dev, struct vhost_virtqueue *vq,
 				goto out;
 			}
 
+			dlen = desc->len;
 			desc_addr = vhost_iova_to_vva(dev,
 							vq, desc->addr,
-							desc->len,
-							VHOST_ACCESS_RO);
-			if (unlikely(!desc_addr)) {
+							&dlen, VHOST_ACCESS_RO);
+			if (unlikely(!desc_addr || dlen != desc->len)) {
 				error = -1;
 				goto out;
 			}
@@ -1252,18 +1267,21 @@ rte_vhost_dequeue_burst(int vid, uint16_t queue_id,
 	for (i = 0; i < count; i++) {
 		struct vring_desc *desc;
 		uint16_t sz, idx;
+		uint64_t dlen;
 		int err;
 
 		if (likely(i + 1 < count))
 			rte_prefetch0(&vq->desc[desc_indexes[i + 1]]);
 
 		if (vq->desc[desc_indexes[i]].flags & VRING_DESC_F_INDIRECT) {
+			dlen = vq->desc[desc_indexes[i]].len;
 			desc = (struct vring_desc *)(uintptr_t)
 				vhost_iova_to_vva(dev, vq,
 						vq->desc[desc_indexes[i]].addr,
-						vq->desc[desc_indexes[i]].len,
+						&dlen,
 						VHOST_ACCESS_RO);
-			if (unlikely(!desc))
+			if (unlikely(!desc ||
+					dlen != vq->desc[desc_indexes[i]].len))
 				break;
 
 			rte_prefetch0(desc);
-- 
2.14.3


* [dpdk-dev] [PATCH 03/12] vhost: introduce safe API for GPA translation
  2018-04-23 15:58 [dpdk-dev] [PATCH 00/12] Vhost: CVE-2018-1059 fixes Maxime Coquelin
  2018-04-23 15:58 ` [dpdk-dev] [PATCH 01/12] vhost: fix indirect descriptors table translation size Maxime Coquelin
  2018-04-23 15:58 ` [dpdk-dev] [PATCH 02/12] vhost: check all range is mapped when translating GPAs Maxime Coquelin
@ 2018-04-23 15:58 ` Maxime Coquelin
  2018-04-23 15:58 ` [dpdk-dev] [PATCH 04/12] vhost: ensure all range is mapped when translating QVAs Maxime Coquelin
                   ` (9 subsequent siblings)
  12 siblings, 0 replies; 18+ messages in thread
From: Maxime Coquelin @ 2018-04-23 15:58 UTC (permalink / raw)
  To: dev; +Cc: Maxime Coquelin

This new rte_vhost_va_from_guest_pa() API takes an extra len
parameter, used to specify the size of the range to be mapped.
The effective mapped length is returned via the len parameter.
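
The in/out semantics of the len parameter can be illustrated with a
stand-alone sketch. The two-region layout below is invented for the
example, and the function mirrors, but is not, the actual
rte_vhost_va_from_guest_pa() implementation.

```c
#include <assert.h>
#include <stdint.h>

/* Invented guest memory layout: two regions that are adjacent in
 * guest physical address space but not in host virtual space. */
struct mock_region {
	uint64_t guest_phys_addr;
	uint64_t host_user_addr;
	uint64_t size;
};

static struct mock_region mock_regions[2] = {
	{ 0x0000, 0x700000, 0x1000 },
	{ 0x1000, 0x900000, 0x1000 },
};

/*
 * Same contract as the new API: *len is in/out. On success it is
 * clamped to what is contiguous from gpa within the matched region;
 * on lookup failure it is set to 0.
 */
static uint64_t
mock_va_from_guest_pa(uint64_t gpa, uint64_t *len)
{
	uint32_t i;

	for (i = 0; i < 2; i++) {
		struct mock_region *r = &mock_regions[i];

		if (gpa >= r->guest_phys_addr &&
		    gpa < r->guest_phys_addr + r->size) {
			if (*len > r->guest_phys_addr + r->size - gpa)
				*len = r->guest_phys_addr + r->size - gpa;

			return gpa - r->guest_phys_addr + r->host_user_addr;
		}
	}

	*len = 0;
	return 0;
}
```

A request for 0x2000 bytes at GPA 0x800 spans both regions, so only
0x800 bytes (the remainder of the first region) are reported as mapped;
the caller must then loop over the rest or treat it as an error.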

This issue has been assigned CVE-2018-1059.

Reported-by: Yongji Xie <xieyongji@baidu.com>
Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
---
 lib/librte_vhost/rte_vhost.h           | 40 ++++++++++++++++++++++++++++++++++
 lib/librte_vhost/rte_vhost_version.map |  4 +++-
 lib/librte_vhost/vhost.h               |  2 +-
 3 files changed, 44 insertions(+), 2 deletions(-)

diff --git a/lib/librte_vhost/rte_vhost.h b/lib/librte_vhost/rte_vhost.h
index e4e8824c9..7d065137a 100644
--- a/lib/librte_vhost/rte_vhost.h
+++ b/lib/librte_vhost/rte_vhost.h
@@ -149,6 +149,46 @@ rte_vhost_gpa_to_vva(struct rte_vhost_memory *mem, uint64_t gpa)
 	return 0;
 }
 
+/**
+ * Convert guest physical address to host virtual address safely
+ *
+ * This variant of rte_vhost_gpa_to_vva() ensures that all of the
+ * requested length is mapped and contiguous in the process address
+ * space.
+ *
+ * @param mem
+ *  the guest memory regions
+ * @param gpa
+ *  the guest physical address for querying
+ * @param len
+ *  the size of the requested area to map, updated with actual size mapped
+ * @return
+ *  the host virtual address on success, 0 on failure
+ */
+static __rte_always_inline uint64_t
+rte_vhost_va_from_guest_pa(struct rte_vhost_memory *mem,
+						   uint64_t gpa, uint64_t *len)
+{
+	struct rte_vhost_mem_region *r;
+	uint32_t i;
+
+	for (i = 0; i < mem->nregions; i++) {
+		r = &mem->regions[i];
+		if (gpa >= r->guest_phys_addr &&
+		    gpa <  r->guest_phys_addr + r->size) {
+
+			if (unlikely(*len > r->guest_phys_addr + r->size - gpa))
+				*len = r->guest_phys_addr + r->size - gpa;
+
+			return gpa - r->guest_phys_addr +
+			       r->host_user_addr;
+		}
+	}
+	*len = 0;
+
+	return 0;
+}
+
 #define RTE_VHOST_NEED_LOG(features)	((features) & (1ULL << VHOST_F_LOG_ALL))
 
 /**
diff --git a/lib/librte_vhost/rte_vhost_version.map b/lib/librte_vhost/rte_vhost_version.map
index b9d338077..8243bcabf 100644
--- a/lib/librte_vhost/rte_vhost_version.map
+++ b/lib/librte_vhost/rte_vhost_version.map
@@ -61,6 +61,8 @@ DPDK_18.02 {
 } DPDK_17.08;
 
 EXPERIMENTAL {
+	global:
+
 	rte_vdpa_register_device;
 	rte_vdpa_unregister_device;
 	rte_vdpa_find_device_id;
@@ -79,5 +81,5 @@ EXPERIMENTAL {
 	rte_vhost_crypto_fetch_requests;
 	rte_vhost_crypto_finalize_requests;
 	rte_vhost_crypto_set_zero_copy;
-
+	rte_vhost_va_from_guest_pa;
 } DPDK_18.02;
diff --git a/lib/librte_vhost/vhost.h b/lib/librte_vhost/vhost.h
index f7dbd2c94..ba2fc7404 100644
--- a/lib/librte_vhost/vhost.h
+++ b/lib/librte_vhost/vhost.h
@@ -446,7 +446,7 @@ vhost_iova_to_vva(struct virtio_net *dev, struct vhost_virtqueue *vq,
 			uint64_t iova, uint64_t *len, uint8_t perm)
 {
 	if (!(dev->features & (1ULL << VIRTIO_F_IOMMU_PLATFORM)))
-		return rte_vhost_gpa_to_vva(dev->mem, iova);
+		return rte_vhost_va_from_guest_pa(dev->mem, iova, len);
 
 	return __vhost_iova_to_vva(dev, vq, iova, len, perm);
 }
-- 
2.14.3


* [dpdk-dev] [PATCH 04/12] vhost: ensure all range is mapped when translating QVAs
  2018-04-23 15:58 [dpdk-dev] [PATCH 00/12] Vhost: CVE-2018-1059 fixes Maxime Coquelin
                   ` (2 preceding siblings ...)
  2018-04-23 15:58 ` [dpdk-dev] [PATCH 03/12] vhost: introduce safe API for GPA translation Maxime Coquelin
@ 2018-04-23 15:58 ` Maxime Coquelin
  2018-04-23 15:58 ` [dpdk-dev] [PATCH 05/12] vhost: add support for non-contiguous indirect descs tables Maxime Coquelin
                   ` (8 subsequent siblings)
  12 siblings, 0 replies; 18+ messages in thread
From: Maxime Coquelin @ 2018-04-23 15:58 UTC (permalink / raw)
  To: dev; +Cc: Maxime Coquelin

This patch ensures that the full address range is mapped when
translating addresses from the master's address space (e.g. QEMU
virtual addresses) to process VAs.

This issue has been assigned CVE-2018-1059.

Reported-by: Yongji Xie <xieyongji@baidu.com>
Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
---
 lib/librte_vhost/vhost_user.c | 58 +++++++++++++++++++++++++++----------------
 1 file changed, 36 insertions(+), 22 deletions(-)

diff --git a/lib/librte_vhost/vhost_user.c b/lib/librte_vhost/vhost_user.c
index a3dccf67b..90194bf09 100644
--- a/lib/librte_vhost/vhost_user.c
+++ b/lib/librte_vhost/vhost_user.c
@@ -422,21 +422,26 @@ numa_realloc(struct virtio_net *dev, int index __rte_unused)
 
 /* Converts QEMU virtual address to Vhost virtual address. */
 static uint64_t
-qva_to_vva(struct virtio_net *dev, uint64_t qva)
+qva_to_vva(struct virtio_net *dev, uint64_t qva, uint64_t *len)
 {
-	struct rte_vhost_mem_region *reg;
+	struct rte_vhost_mem_region *r;
 	uint32_t i;
 
 	/* Find the region where the address lives. */
 	for (i = 0; i < dev->mem->nregions; i++) {
-		reg = &dev->mem->regions[i];
+		r = &dev->mem->regions[i];
+
+		if (qva >= r->guest_user_addr &&
+		    qva <  r->guest_user_addr + r->size) {
+
+			if (unlikely(*len > r->guest_user_addr + r->size - qva))
+				*len = r->guest_user_addr + r->size - qva;
 
-		if (qva >= reg->guest_user_addr &&
-		    qva <  reg->guest_user_addr + reg->size) {
-			return qva - reg->guest_user_addr +
-			       reg->host_user_addr;
+			return qva - r->guest_user_addr +
+			       r->host_user_addr;
 		}
 	}
+	*len = 0;
 
 	return 0;
 }
@@ -449,20 +454,20 @@ qva_to_vva(struct virtio_net *dev, uint64_t qva)
  */
 static uint64_t
 ring_addr_to_vva(struct virtio_net *dev, struct vhost_virtqueue *vq,
-		uint64_t ra, uint64_t size)
+		uint64_t ra, uint64_t *size)
 {
 	if (dev->features & (1ULL << VIRTIO_F_IOMMU_PLATFORM)) {
 		uint64_t vva;
 
 		vva = vhost_user_iotlb_cache_find(vq, ra,
-					&size, VHOST_ACCESS_RW);
+					size, VHOST_ACCESS_RW);
 		if (!vva)
 			vhost_user_iotlb_miss(dev, ra, VHOST_ACCESS_RW);
 
 		return vva;
 	}
 
-	return qva_to_vva(dev, ra);
+	return qva_to_vva(dev, ra, size);
 }
 
 static struct virtio_net *
@@ -470,16 +475,18 @@ translate_ring_addresses(struct virtio_net *dev, int vq_index)
 {
 	struct vhost_virtqueue *vq = dev->virtqueue[vq_index];
 	struct vhost_vring_addr *addr = &vq->ring_addrs;
+	uint64_t len;
 
 	/* The addresses are converted from QEMU virtual to Vhost virtual. */
 	if (vq->desc && vq->avail && vq->used)
 		return dev;
 
+	len = sizeof(struct vring_desc) * vq->size;
 	vq->desc = (struct vring_desc *)(uintptr_t)ring_addr_to_vva(dev,
-			vq, addr->desc_user_addr, sizeof(struct vring_desc));
-	if (vq->desc == 0) {
+			vq, addr->desc_user_addr, &len);
+	if (vq->desc == 0 || len != sizeof(struct vring_desc) * vq->size) {
 		RTE_LOG(DEBUG, VHOST_CONFIG,
-			"(%d) failed to find desc ring address.\n",
+			"(%d) failed to map desc ring.\n",
 			dev->vid);
 		return dev;
 	}
@@ -488,20 +495,26 @@ translate_ring_addresses(struct virtio_net *dev, int vq_index)
 	vq = dev->virtqueue[vq_index];
 	addr = &vq->ring_addrs;
 
+	len = sizeof(struct vring_avail) + sizeof(uint16_t) * vq->size;
 	vq->avail = (struct vring_avail *)(uintptr_t)ring_addr_to_vva(dev,
-			vq, addr->avail_user_addr, sizeof(struct vring_avail));
-	if (vq->avail == 0) {
+			vq, addr->avail_user_addr, &len);
+	if (vq->avail == 0 ||
+			len != sizeof(struct vring_avail) +
+			sizeof(uint16_t) * vq->size) {
 		RTE_LOG(DEBUG, VHOST_CONFIG,
-			"(%d) failed to find avail ring address.\n",
+			"(%d) failed to map avail ring.\n",
 			dev->vid);
 		return dev;
 	}
 
+	len = sizeof(struct vring_used) +
+		sizeof(struct vring_used_elem) * vq->size;
 	vq->used = (struct vring_used *)(uintptr_t)ring_addr_to_vva(dev,
-			vq, addr->used_user_addr, sizeof(struct vring_used));
-	if (vq->used == 0) {
+			vq, addr->used_user_addr, &len);
+	if (vq->used == 0 || len != sizeof(struct vring_used) +
+			sizeof(struct vring_used_elem) * vq->size) {
 		RTE_LOG(DEBUG, VHOST_CONFIG,
-			"(%d) failed to find used ring address.\n",
+			"(%d) failed to map used ring.\n",
 			dev->vid);
 		return dev;
 	}
@@ -1258,11 +1271,12 @@ vhost_user_iotlb_msg(struct virtio_net **pdev, struct VhostUserMsg *msg)
 	struct virtio_net *dev = *pdev;
 	struct vhost_iotlb_msg *imsg = &msg->payload.iotlb;
 	uint16_t i;
-	uint64_t vva;
+	uint64_t vva, len;
 
 	switch (imsg->type) {
 	case VHOST_IOTLB_UPDATE:
-		vva = qva_to_vva(dev, imsg->uaddr);
+		len = imsg->size;
+		vva = qva_to_vva(dev, imsg->uaddr, &len);
 		if (!vva)
 			return -1;
 
@@ -1270,7 +1284,7 @@ vhost_user_iotlb_msg(struct virtio_net **pdev, struct VhostUserMsg *msg)
 			struct vhost_virtqueue *vq = dev->virtqueue[i];
 
 			vhost_user_iotlb_cache_insert(vq, imsg->iova, vva,
-					imsg->size, imsg->perm);
+					len, imsg->perm);
 
 			if (is_vring_iotlb_update(vq, imsg))
 				*pdev = dev = translate_ring_addresses(dev, i);
-- 
2.14.3


* [dpdk-dev] [PATCH 05/12] vhost: add support for non-contiguous indirect descs tables
  2018-04-23 15:58 [dpdk-dev] [PATCH 00/12] Vhost: CVE-2018-1059 fixes Maxime Coquelin
                   ` (3 preceding siblings ...)
  2018-04-23 15:58 ` [dpdk-dev] [PATCH 04/12] vhost: ensure all range is mapped when translating QVAs Maxime Coquelin
@ 2018-04-23 15:58 ` Maxime Coquelin
  2018-04-23 15:58 ` [dpdk-dev] [PATCH 06/12] vhost: handle virtually non-contiguous buffers in Tx Maxime Coquelin
                   ` (7 subsequent siblings)
  12 siblings, 0 replies; 18+ messages in thread
From: Maxime Coquelin @ 2018-04-23 15:58 UTC (permalink / raw)
  To: dev; +Cc: Maxime Coquelin

This patch adds support for indirect descriptor tables that are
non-contiguous in VA space.

When this happens, which is unlikely, a table is allocated and the
non-contiguous content is copied into it.
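
The allocate-and-copy fallback can be sketched on its own as below;
mock_xlate() stands in for vhost_iova_to_vva() and the two-chunk guest
layout is invented for the example.

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* A guest range [0, 8) whose two 4-byte halves live in separate,
 * non-adjacent host buffers. */
static uint8_t chunk_a[4] = { 1, 2, 3, 4 };
static uint8_t chunk_b[4] = { 5, 6, 7, 8 };

static void *
mock_xlate(uint64_t gaddr, uint64_t *len)
{
	if (gaddr < 4) {
		if (*len > 4 - gaddr)
			*len = 4 - gaddr;
		return chunk_a + gaddr;
	}
	if (gaddr < 8) {
		if (*len > 8 - gaddr)
			*len = 8 - gaddr;
		return chunk_b + (gaddr - 4);
	}
	*len = 0;
	return NULL;
}

/*
 * Linearize a possibly non-contiguous guest range into one freshly
 * allocated buffer, chunk by chunk -- the same idea as the
 * alloc_copy_ind_table() helper this patch adds. Caller frees.
 */
static void *
copy_noncontig(uint64_t gaddr, uint64_t total,
	       void *(*xlate)(uint64_t, uint64_t *))
{
	uint8_t *buf = malloc(total);
	uint64_t done = 0;

	if (buf == NULL)
		return NULL;

	while (done < total) {
		uint64_t len = total - done;
		void *src = xlate(gaddr + done, &len);

		if (src == NULL || len == 0) { /* hole in the mapping */
			free(buf);
			return NULL;
		}
		memcpy(buf + done, src, len);
		done += len;
	}

	return buf;
}
```

The loop terminates because each successful translation makes forward
progress, and any hole in the mapping aborts the copy.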

This issue has been assigned CVE-2018-1059.

Reported-by: Yongji Xie <xieyongji@baidu.com>
Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
---
 lib/librte_vhost/virtio_net.c | 108 +++++++++++++++++++++++++++++++++++++++---
 1 file changed, 101 insertions(+), 7 deletions(-)

diff --git a/lib/librte_vhost/virtio_net.c b/lib/librte_vhost/virtio_net.c
index 2be3e7a7e..e43df8cb6 100644
--- a/lib/librte_vhost/virtio_net.c
+++ b/lib/librte_vhost/virtio_net.c
@@ -16,6 +16,7 @@
 #include <rte_sctp.h>
 #include <rte_arp.h>
 #include <rte_spinlock.h>
+#include <rte_malloc.h>
 
 #include "iotlb.h"
 #include "vhost.h"
@@ -30,6 +31,46 @@ is_valid_virt_queue_idx(uint32_t idx, int is_tx, uint32_t nr_vring)
 	return (is_tx ^ (idx & 1)) == 0 && idx < nr_vring;
 }
 
+static __rte_always_inline struct vring_desc *
+alloc_copy_ind_table(struct virtio_net *dev, struct vhost_virtqueue *vq,
+					 struct vring_desc *desc)
+{
+	struct vring_desc *idesc;
+	uint64_t src, dst;
+	uint64_t len, remain = desc->len;
+	uint64_t desc_addr = desc->addr;
+
+	idesc = rte_malloc(__func__, desc->len, 0);
+	if (unlikely(!idesc))
+		return 0;
+
+	dst = (uint64_t)(uintptr_t)idesc;
+
+	while (remain) {
+		len = remain;
+		src = vhost_iova_to_vva(dev, vq, desc_addr, &len,
+				VHOST_ACCESS_RO);
+		if (unlikely(!src || !len)) {
+			rte_free(idesc);
+			return 0;
+		}
+
+		rte_memcpy((void *)(uintptr_t)dst, (void *)(uintptr_t)src, len);
+
+		remain -= len;
+		dst += len;
+		desc_addr += len;
+	}
+
+	return idesc;
+}
+
+static __rte_always_inline void
+free_ind_table(struct vring_desc *idesc)
+{
+	rte_free(idesc);
+}
+
 static __rte_always_inline void
 do_flush_shadow_used_ring(struct virtio_net *dev, struct vhost_virtqueue *vq,
 			  uint16_t to, uint16_t from, uint16_t size)
@@ -351,6 +392,7 @@ virtio_dev_rx(struct virtio_net *dev, uint16_t queue_id,
 
 	rte_prefetch0(&vq->desc[desc_indexes[0]]);
 	for (i = 0; i < count; i++) {
+		struct vring_desc *idesc = NULL;
 		uint16_t desc_idx = desc_indexes[i];
 		int err;
 
@@ -360,12 +402,24 @@ virtio_dev_rx(struct virtio_net *dev, uint16_t queue_id,
 				vhost_iova_to_vva(dev,
 						vq, vq->desc[desc_idx].addr,
 						&dlen, VHOST_ACCESS_RO);
-			if (unlikely(!descs ||
-					dlen != vq->desc[desc_idx].len)) {
+			if (unlikely(!descs)) {
 				count = i;
 				break;
 			}
 
+			if (unlikely(dlen < vq->desc[desc_idx].len)) {
+				/*
+				 * The indirect desc table is not contiguous
+				 * in process VA space, we have to copy it.
+				 */
+				idesc = alloc_copy_ind_table(dev, vq,
+							&vq->desc[desc_idx]);
+				if (unlikely(!idesc))
+					break;
+
+				descs = idesc;
+			}
+
 			desc_idx = 0;
 			sz = vq->desc[desc_idx].len / sizeof(*descs);
 		} else {
@@ -376,11 +430,15 @@ virtio_dev_rx(struct virtio_net *dev, uint16_t queue_id,
 		err = copy_mbuf_to_desc(dev, vq, descs, pkts[i], desc_idx, sz);
 		if (unlikely(err)) {
 			count = i;
+			free_ind_table(idesc);
 			break;
 		}
 
 		if (i + 1 < count)
 			rte_prefetch0(&vq->desc[desc_indexes[i+1]]);
+
+		if (unlikely(!!idesc))
+			free_ind_table(idesc);
 	}
 
 	do_data_copy_enqueue(dev, vq);
@@ -415,6 +473,7 @@ fill_vec_buf(struct virtio_net *dev, struct vhost_virtqueue *vq,
 	uint32_t len    = 0;
 	uint64_t dlen;
 	struct vring_desc *descs = vq->desc;
+	struct vring_desc *idesc = NULL;
 
 	*desc_chain_head = idx;
 
@@ -424,15 +483,29 @@ fill_vec_buf(struct virtio_net *dev, struct vhost_virtqueue *vq,
 			vhost_iova_to_vva(dev, vq, vq->desc[idx].addr,
 						&dlen,
 						VHOST_ACCESS_RO);
-		if (unlikely(!descs || dlen != vq->desc[idx].len))
+		if (unlikely(!descs))
 			return -1;
 
+		if (unlikely(dlen < vq->desc[idx].len)) {
+			/*
+			 * The indirect desc table is not contiguous
+			 * in process VA space, we have to copy it.
+			 */
+			idesc = alloc_copy_ind_table(dev, vq, &vq->desc[idx]);
+			if (unlikely(!idesc))
+				return -1;
+
+			descs = idesc;
+		}
+
 		idx = 0;
 	}
 
 	while (1) {
-		if (unlikely(vec_id >= BUF_VECTOR_MAX || idx >= vq->size))
+		if (unlikely(vec_id >= BUF_VECTOR_MAX || idx >= vq->size)) {
+			free_ind_table(idesc);
 			return -1;
+		}
 
 		len += descs[idx].len;
 		buf_vec[vec_id].buf_addr = descs[idx].addr;
@@ -449,6 +522,9 @@ fill_vec_buf(struct virtio_net *dev, struct vhost_virtqueue *vq,
 	*desc_chain_len = len;
 	*vec_idx = vec_id;
 
+	if (unlikely(!!idesc))
+		free_ind_table(idesc);
+
 	return 0;
 }
 
@@ -1265,7 +1341,7 @@ rte_vhost_dequeue_burst(int vid, uint16_t queue_id,
 	/* Prefetch descriptor index. */
 	rte_prefetch0(&vq->desc[desc_indexes[0]]);
 	for (i = 0; i < count; i++) {
-		struct vring_desc *desc;
+		struct vring_desc *desc, *idesc = NULL;
 		uint16_t sz, idx;
 		uint64_t dlen;
 		int err;
@@ -1280,10 +1356,22 @@ rte_vhost_dequeue_burst(int vid, uint16_t queue_id,
 						vq->desc[desc_indexes[i]].addr,
 						&dlen,
 						VHOST_ACCESS_RO);
-			if (unlikely(!desc ||
-					dlen != vq->desc[desc_indexes[i]].len))
+			if (unlikely(!desc))
 				break;
 
+			if (unlikely(dlen < vq->desc[desc_indexes[i]].len)) {
+				/*
+				 * The indirect desc table is not contiguous
+				 * in process VA space, we have to copy it.
+				 */
+				idesc = alloc_copy_ind_table(dev, vq,
+						&vq->desc[desc_indexes[i]]);
+				if (unlikely(!idesc))
+					break;
+
+				desc = idesc;
+			}
+
 			rte_prefetch0(desc);
 			sz = vq->desc[desc_indexes[i]].len / sizeof(*desc);
 			idx = 0;
@@ -1297,6 +1385,7 @@ rte_vhost_dequeue_burst(int vid, uint16_t queue_id,
 		if (unlikely(pkts[i] == NULL)) {
 			RTE_LOG(ERR, VHOST_DATA,
 				"Failed to allocate memory for mbuf.\n");
+			free_ind_table(idesc);
 			break;
 		}
 
@@ -1304,6 +1393,7 @@ rte_vhost_dequeue_burst(int vid, uint16_t queue_id,
 					mbuf_pool);
 		if (unlikely(err)) {
 			rte_pktmbuf_free(pkts[i]);
+			free_ind_table(idesc);
 			break;
 		}
 
@@ -1313,6 +1403,7 @@ rte_vhost_dequeue_burst(int vid, uint16_t queue_id,
 			zmbuf = get_zmbuf(vq);
 			if (!zmbuf) {
 				rte_pktmbuf_free(pkts[i]);
+				free_ind_table(idesc);
 				break;
 			}
 			zmbuf->mbuf = pkts[i];
@@ -1329,6 +1420,9 @@ rte_vhost_dequeue_burst(int vid, uint16_t queue_id,
 			vq->nr_zmbuf += 1;
 			TAILQ_INSERT_TAIL(&vq->zmbuf_list, zmbuf, next);
 		}
+
+		if (unlikely(!!idesc))
+			free_ind_table(idesc);
 	}
 	vq->last_avail_idx += i;
 
-- 
2.14.3


* [dpdk-dev] [PATCH 06/12] vhost: handle virtually non-contiguous buffers in Tx
  2018-04-23 15:58 [dpdk-dev] [PATCH 00/12] Vhost: CVE-2018-1059 fixes Maxime Coquelin
                   ` (4 preceding siblings ...)
  2018-04-23 15:58 ` [dpdk-dev] [PATCH 05/12] vhost: add support for non-contiguous indirect descs tables Maxime Coquelin
@ 2018-04-23 15:58 ` Maxime Coquelin
  2018-04-23 15:58 ` [dpdk-dev] [PATCH 07/12] vhost: handle virtually non-contiguous buffers in Rx Maxime Coquelin
                   ` (6 subsequent siblings)
  12 siblings, 0 replies; 18+ messages in thread
From: Maxime Coquelin @ 2018-04-23 15:58 UTC (permalink / raw)
  To: dev; +Cc: Maxime Coquelin

This patch enables the handling of buffers that are non-contiguous
in the process virtual address space in the dequeue path.

When the virtio-net header doesn't fit in a single chunk, it is
copied into a local variable before being processed.

For the packet content, the copy length is limited to the chunk
size, the next chunks' VAs being fetched afterwards.

This issue has been assigned CVE-2018-1059.

Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
---
 lib/librte_vhost/virtio_net.c | 117 ++++++++++++++++++++++++++++++++++--------
 1 file changed, 95 insertions(+), 22 deletions(-)

diff --git a/lib/librte_vhost/virtio_net.c b/lib/librte_vhost/virtio_net.c
index e43df8cb6..dcbfbd5ef 100644
--- a/lib/librte_vhost/virtio_net.c
+++ b/lib/librte_vhost/virtio_net.c
@@ -931,12 +931,13 @@ copy_desc_to_mbuf(struct virtio_net *dev, struct vhost_virtqueue *vq,
 		  struct rte_mempool *mbuf_pool)
 {
 	struct vring_desc *desc;
-	uint64_t desc_addr;
+	uint64_t desc_addr, desc_gaddr;
 	uint32_t desc_avail, desc_offset;
 	uint32_t mbuf_avail, mbuf_offset;
 	uint32_t cpy_len;
-	uint64_t dlen;
+	uint64_t desc_chunck_len;
 	struct rte_mbuf *cur = m, *prev = m;
+	struct virtio_net_hdr tmp_hdr;
 	struct virtio_net_hdr *hdr = NULL;
 	/* A counter to avoid desc dead loop chain */
 	uint32_t nr_desc = 1;
@@ -951,19 +952,52 @@ copy_desc_to_mbuf(struct virtio_net *dev, struct vhost_virtqueue *vq,
 		goto out;
 	}
 
-	dlen = desc->len;
+	desc_chunck_len = desc->len;
+	desc_gaddr = desc->addr;
 	desc_addr = vhost_iova_to_vva(dev,
-					vq, desc->addr,
-					&dlen,
+					vq, desc_gaddr,
+					&desc_chunck_len,
 					VHOST_ACCESS_RO);
-	if (unlikely(!desc_addr || dlen != desc->len)) {
+	if (unlikely(!desc_addr)) {
 		error = -1;
 		goto out;
 	}
 
 	if (virtio_net_with_host_offload(dev)) {
-		hdr = (struct virtio_net_hdr *)((uintptr_t)desc_addr);
-		rte_prefetch0(hdr);
+		if (unlikely(desc_chunck_len < sizeof(struct virtio_net_hdr))) {
+			uint64_t len = desc_chunck_len;
+			uint64_t remain = sizeof(struct virtio_net_hdr);
+			uint64_t src = desc_addr;
+			uint64_t dst = (uint64_t)(uintptr_t)&tmp_hdr;
+			uint64_t guest_addr = desc_gaddr;
+
+			/*
+			 * No luck, the virtio-net header doesn't fit
+			 * in a contiguous virtual area.
+			 */
+			while (remain) {
+				len = remain;
+				src = vhost_iova_to_vva(dev, vq,
+						guest_addr, &len,
+						VHOST_ACCESS_RO);
+				if (unlikely(!src || !len)) {
+					error = -1;
+					goto out;
+				}
+
+				rte_memcpy((void *)(uintptr_t)dst,
+						   (void *)(uintptr_t)src, len);
+
+				guest_addr += len;
+				remain -= len;
+				dst += len;
+			}
+
+			hdr = &tmp_hdr;
+		} else {
+			hdr = (struct virtio_net_hdr *)((uintptr_t)desc_addr);
+			rte_prefetch0(hdr);
+		}
 	}
 
 	/*
@@ -979,12 +1013,13 @@ copy_desc_to_mbuf(struct virtio_net *dev, struct vhost_virtqueue *vq,
 			goto out;
 		}
 
-		dlen = desc->len;
+		desc_chunck_len = desc->len;
+		desc_gaddr = desc->addr;
 		desc_addr = vhost_iova_to_vva(dev,
-							vq, desc->addr,
-							&dlen,
+							vq, desc_gaddr,
+							&desc_chunck_len,
 							VHOST_ACCESS_RO);
-		if (unlikely(!desc_addr || dlen != desc->len)) {
+		if (unlikely(!desc_addr)) {
 			error = -1;
 			goto out;
 		}
@@ -994,19 +1029,37 @@ copy_desc_to_mbuf(struct virtio_net *dev, struct vhost_virtqueue *vq,
 		nr_desc    += 1;
 	} else {
 		desc_avail  = desc->len - dev->vhost_hlen;
-		desc_offset = dev->vhost_hlen;
+
+		if (unlikely(desc_chunck_len < dev->vhost_hlen)) {
+			desc_chunck_len = desc_avail;
+			desc_gaddr += dev->vhost_hlen;
+			desc_addr = vhost_iova_to_vva(dev,
+					vq, desc_gaddr,
+					&desc_chunck_len,
+					VHOST_ACCESS_RO);
+			if (unlikely(!desc_addr)) {
+				error = -1;
+				goto out;
+			}
+
+			desc_offset = 0;
+		} else {
+			desc_offset = dev->vhost_hlen;
+			desc_chunck_len -= dev->vhost_hlen;
+		}
 	}
 
 	rte_prefetch0((void *)(uintptr_t)(desc_addr + desc_offset));
 
-	PRINT_PACKET(dev, (uintptr_t)(desc_addr + desc_offset), desc_avail, 0);
+	PRINT_PACKET(dev, (uintptr_t)(desc_addr + desc_offset),
+			(uint32_t)desc_chunck_len, 0);
 
 	mbuf_offset = 0;
 	mbuf_avail  = m->buf_len - RTE_PKTMBUF_HEADROOM;
 	while (1) {
 		uint64_t hpa;
 
-		cpy_len = RTE_MIN(desc_avail, mbuf_avail);
+		cpy_len = RTE_MIN(desc_chunck_len, mbuf_avail);
 
 		/*
 		 * A desc buf might across two host physical pages that are
@@ -1014,7 +1067,7 @@ copy_desc_to_mbuf(struct virtio_net *dev, struct vhost_virtqueue *vq,
 		 * will be copied even though zero copy is enabled.
 		 */
 		if (unlikely(dev->dequeue_zero_copy && (hpa = gpa_to_hpa(dev,
-					desc->addr + desc_offset, cpy_len)))) {
+					desc_gaddr + desc_offset, cpy_len)))) {
 			cur->data_len = cpy_len;
 			cur->data_off = 0;
 			cur->buf_addr = (void *)(uintptr_t)(desc_addr
@@ -1029,7 +1082,8 @@ copy_desc_to_mbuf(struct virtio_net *dev, struct vhost_virtqueue *vq,
 		} else {
 			if (likely(cpy_len > MAX_BATCH_LEN ||
 				   copy_nb >= vq->size ||
-				   (hdr && cur == m))) {
+				   (hdr && cur == m) ||
+				   desc->len != desc_chunck_len)) {
 				rte_memcpy(rte_pktmbuf_mtod_offset(cur, void *,
 								   mbuf_offset),
 					   (void *)((uintptr_t)(desc_addr +
@@ -1050,6 +1104,7 @@ copy_desc_to_mbuf(struct virtio_net *dev, struct vhost_virtqueue *vq,
 		mbuf_avail  -= cpy_len;
 		mbuf_offset += cpy_len;
 		desc_avail  -= cpy_len;
+		desc_chunck_len -= cpy_len;
 		desc_offset += cpy_len;
 
 		/* This desc reaches to its end, get the next one */
@@ -1068,11 +1123,13 @@ copy_desc_to_mbuf(struct virtio_net *dev, struct vhost_virtqueue *vq,
 				goto out;
 			}
 
-			dlen = desc->len;
+			desc_chunck_len = desc->len;
+			desc_gaddr = desc->addr;
 			desc_addr = vhost_iova_to_vva(dev,
-							vq, desc->addr,
-							&dlen, VHOST_ACCESS_RO);
-			if (unlikely(!desc_addr || dlen != desc->len)) {
+							vq, desc_gaddr,
+							&desc_chunck_len,
+							VHOST_ACCESS_RO);
+			if (unlikely(!desc_addr)) {
 				error = -1;
 				goto out;
 			}
@@ -1082,7 +1139,23 @@ copy_desc_to_mbuf(struct virtio_net *dev, struct vhost_virtqueue *vq,
 			desc_offset = 0;
 			desc_avail  = desc->len;
 
-			PRINT_PACKET(dev, (uintptr_t)desc_addr, desc->len, 0);
+			PRINT_PACKET(dev, (uintptr_t)desc_addr,
+					(uint32_t)desc_chunck_len, 0);
+		} else if (unlikely(desc_chunck_len == 0)) {
+			desc_chunck_len = desc_avail;
+			desc_gaddr += desc_offset;
+			desc_addr = vhost_iova_to_vva(dev, vq,
+					desc_gaddr,
+					&desc_chunck_len,
+					VHOST_ACCESS_RO);
+			if (unlikely(!desc_addr)) {
+				error = -1;
+				goto out;
+			}
+			desc_offset = 0;
+
+			PRINT_PACKET(dev, (uintptr_t)desc_addr,
+					(uint32_t)desc_chunck_len, 0);
 		}
 
 		/*
-- 
2.14.3

^ permalink raw reply	[flat|nested] 18+ messages in thread

* [dpdk-dev] [PATCH 07/12] vhost: handle virtually non-contiguous buffers in Rx
  2018-04-23 15:58 [dpdk-dev] [PATCH 00/12] Vhost: CVE-2018-1059 fixes Maxime Coquelin
                   ` (5 preceding siblings ...)
  2018-04-23 15:58 ` [dpdk-dev] [PATCH 06/12] vhost: handle virtually non-contiguous buffers in Tx Maxime Coquelin
@ 2018-04-23 15:58 ` Maxime Coquelin
  2018-04-23 15:58 ` [dpdk-dev] [PATCH 08/12] vhost: handle virtually non-contiguous buffers in Rx-mrg Maxime Coquelin
                   ` (5 subsequent siblings)
  12 siblings, 0 replies; 18+ messages in thread
From: Maxime Coquelin @ 2018-04-23 15:58 UTC (permalink / raw)
  To: dev; +Cc: Maxime Coquelin

This patch enables the handling of buffers that are non-contiguous
in the process virtual address space in the enqueue path when
mergeable buffers aren't used.

When the virtio-net header doesn't fit in a single chunk, it is
computed in a local variable and copied to the buffer chunks
afterwards.

For the packet content, the copy length is limited to the chunk
size, the next chunks' VAs being fetched afterwards.

This issue has been assigned CVE-2018-1059.

Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
---
 lib/librte_vhost/virtio_net.c | 95 +++++++++++++++++++++++++++++++++++--------
 1 file changed, 77 insertions(+), 18 deletions(-)

diff --git a/lib/librte_vhost/virtio_net.c b/lib/librte_vhost/virtio_net.c
index dcbfbd5ef..531425792 100644
--- a/lib/librte_vhost/virtio_net.c
+++ b/lib/librte_vhost/virtio_net.c
@@ -221,9 +221,9 @@ copy_mbuf_to_desc(struct virtio_net *dev, struct vhost_virtqueue *vq,
 	uint32_t desc_avail, desc_offset;
 	uint32_t mbuf_avail, mbuf_offset;
 	uint32_t cpy_len;
-	uint64_t dlen;
+	uint64_t desc_chunck_len;
 	struct vring_desc *desc;
-	uint64_t desc_addr;
+	uint64_t desc_addr, desc_gaddr;
 	/* A counter to avoid desc dead loop chain */
 	uint16_t nr_desc = 1;
 	struct batch_copy_elem *batch_copy = vq->batch_copy_elems;
@@ -231,28 +231,74 @@ copy_mbuf_to_desc(struct virtio_net *dev, struct vhost_virtqueue *vq,
 	int error = 0;
 
 	desc = &descs[desc_idx];
-	dlen = desc->len;
-	desc_addr = vhost_iova_to_vva(dev, vq, desc->addr,
-					&dlen, VHOST_ACCESS_RW);
+	desc_chunck_len = desc->len;
+	desc_gaddr = desc->addr;
+	desc_addr = vhost_iova_to_vva(dev, vq, desc_gaddr,
+					&desc_chunck_len, VHOST_ACCESS_RW);
 	/*
 	 * Checking of 'desc_addr' placed outside of 'unlikely' macro to avoid
 	 * performance issue with some versions of gcc (4.8.4 and 5.3.0) which
 	 * otherwise stores offset on the stack instead of in a register.
 	 */
-	if (unlikely(dlen != desc->len || desc->len < dev->vhost_hlen) ||
-			!desc_addr) {
+	if (unlikely(desc->len < dev->vhost_hlen) || !desc_addr) {
 		error = -1;
 		goto out;
 	}
 
 	rte_prefetch0((void *)(uintptr_t)desc_addr);
 
-	virtio_enqueue_offload(m, (struct virtio_net_hdr *)(uintptr_t)desc_addr);
-	vhost_log_write(dev, desc->addr, dev->vhost_hlen);
-	PRINT_PACKET(dev, (uintptr_t)desc_addr, dev->vhost_hlen, 0);
+	if (likely(desc_chunck_len >= dev->vhost_hlen)) {
+		virtio_enqueue_offload(m,
+				(struct virtio_net_hdr *)(uintptr_t)desc_addr);
+		PRINT_PACKET(dev, (uintptr_t)desc_addr, dev->vhost_hlen, 0);
+		vhost_log_write(dev, desc_gaddr, dev->vhost_hlen);
+	} else {
+		struct virtio_net_hdr vnet_hdr;
+		uint64_t remain = dev->vhost_hlen;
+		uint64_t len;
+		uint64_t src = (uint64_t)(uintptr_t)&vnet_hdr, dst;
+		uint64_t guest_addr = desc_gaddr;
+
+		virtio_enqueue_offload(m, &vnet_hdr);
+
+		while (remain) {
+			len = remain;
+			dst = vhost_iova_to_vva(dev, vq, guest_addr,
+					&len, VHOST_ACCESS_RW);
+			if (unlikely(!dst || !len)) {
+				error = -1;
+				goto out;
+			}
+
+			rte_memcpy((void *)(uintptr_t)dst,
+					(void *)(uintptr_t)src, len);
+
+			PRINT_PACKET(dev, (uintptr_t)dst, (uint32_t)len, 0);
+			vhost_log_write(dev, guest_addr, len);
+			remain -= len;
+			guest_addr += len;
+			dst += len;
+		}
+	}
 
-	desc_offset = dev->vhost_hlen;
 	desc_avail  = desc->len - dev->vhost_hlen;
+	if (unlikely(desc_chunck_len < dev->vhost_hlen)) {
+		desc_chunck_len = desc_avail;
+		desc_gaddr = desc->addr + dev->vhost_hlen;
+		desc_addr = vhost_iova_to_vva(dev,
+				vq, desc_gaddr,
+				&desc_chunck_len,
+				VHOST_ACCESS_RW);
+		if (unlikely(!desc_addr)) {
+			error = -1;
+			goto out;
+		}
+
+		desc_offset = 0;
+	} else {
+		desc_offset = dev->vhost_hlen;
+		desc_chunck_len -= dev->vhost_hlen;
+	}
 
 	mbuf_avail  = rte_pktmbuf_data_len(m);
 	mbuf_offset = 0;
@@ -278,26 +324,38 @@ copy_mbuf_to_desc(struct virtio_net *dev, struct vhost_virtqueue *vq,
 			}
 
 			desc = &descs[desc->next];
-			dlen = desc->len;
-			desc_addr = vhost_iova_to_vva(dev, vq, desc->addr,
-							&dlen,
+			desc_chunck_len = desc->len;
+			desc_gaddr = desc->addr;
+			desc_addr = vhost_iova_to_vva(dev, vq, desc_gaddr,
+							&desc_chunck_len,
 							VHOST_ACCESS_RW);
-			if (unlikely(!desc_addr || dlen != desc->len)) {
+			if (unlikely(!desc_addr)) {
 				error = -1;
 				goto out;
 			}
 
 			desc_offset = 0;
 			desc_avail  = desc->len;
+		} else if (unlikely(desc_chunck_len == 0)) {
+			desc_chunck_len = desc_avail;
+			desc_gaddr += desc_offset;
+			desc_addr = vhost_iova_to_vva(dev,
+					vq, desc_gaddr,
+					&desc_chunck_len, VHOST_ACCESS_RW);
+			if (unlikely(!desc_addr)) {
+				error = -1;
+				goto out;
+			}
+			desc_offset = 0;
 		}
 
-		cpy_len = RTE_MIN(desc_avail, mbuf_avail);
+		cpy_len = RTE_MIN(desc_chunck_len, mbuf_avail);
 		if (likely(cpy_len > MAX_BATCH_LEN || copy_nb >= vq->size)) {
 			rte_memcpy((void *)((uintptr_t)(desc_addr +
 							desc_offset)),
 				rte_pktmbuf_mtod_offset(m, void *, mbuf_offset),
 				cpy_len);
-			vhost_log_write(dev, desc->addr + desc_offset, cpy_len);
+			vhost_log_write(dev, desc_gaddr + desc_offset, cpy_len);
 			PRINT_PACKET(dev, (uintptr_t)(desc_addr + desc_offset),
 				     cpy_len, 0);
 		} else {
@@ -305,7 +363,7 @@ copy_mbuf_to_desc(struct virtio_net *dev, struct vhost_virtqueue *vq,
 				(void *)((uintptr_t)(desc_addr + desc_offset));
 			batch_copy[copy_nb].src =
 				rte_pktmbuf_mtod_offset(m, void *, mbuf_offset);
-			batch_copy[copy_nb].log_addr = desc->addr + desc_offset;
+			batch_copy[copy_nb].log_addr = desc_gaddr + desc_offset;
 			batch_copy[copy_nb].len = cpy_len;
 			copy_nb++;
 		}
@@ -314,6 +372,7 @@ copy_mbuf_to_desc(struct virtio_net *dev, struct vhost_virtqueue *vq,
 		mbuf_offset += cpy_len;
 		desc_avail  -= cpy_len;
 		desc_offset += cpy_len;
+		desc_chunck_len -= cpy_len;
 	}
 
 out:
-- 
2.14.3

^ permalink raw reply	[flat|nested] 18+ messages in thread

* [dpdk-dev] [PATCH 08/12] vhost: handle virtually non-contiguous buffers in Rx-mrg
  2018-04-23 15:58 [dpdk-dev] [PATCH 00/12] Vhost: CVE-2018-1059 fixes Maxime Coquelin
                   ` (6 preceding siblings ...)
  2018-04-23 15:58 ` [dpdk-dev] [PATCH 07/12] vhost: handle virtually non-contiguous buffers in Rx Maxime Coquelin
@ 2018-04-23 15:58 ` Maxime Coquelin
  2018-04-23 15:58 ` [dpdk-dev] [PATCH 09/12] examples/vhost: move to safe GPA translation API Maxime Coquelin
                   ` (4 subsequent siblings)
  12 siblings, 0 replies; 18+ messages in thread
From: Maxime Coquelin @ 2018-04-23 15:58 UTC (permalink / raw)
  To: dev; +Cc: Maxime Coquelin

This patch enables the handling of buffers that are non-contiguous
in the process virtual address space in the enqueue path when
mergeable buffers are used.

When the virtio-net header doesn't fit in a single chunk, it is
computed in a local variable and copied to the buffer chunks
afterwards.

For the packet content, the copy length is limited to the chunk
size, the next chunks' VAs being fetched afterwards.

This issue has been assigned CVE-2018-1059.

Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
---
 lib/librte_vhost/virtio_net.c | 115 ++++++++++++++++++++++++++++++++----------
 1 file changed, 87 insertions(+), 28 deletions(-)

diff --git a/lib/librte_vhost/virtio_net.c b/lib/librte_vhost/virtio_net.c
index 531425792..5fdd4172b 100644
--- a/lib/librte_vhost/virtio_net.c
+++ b/lib/librte_vhost/virtio_net.c
@@ -638,14 +638,15 @@ copy_mbuf_to_desc_mergeable(struct virtio_net *dev, struct vhost_virtqueue *vq,
 			    uint16_t num_buffers)
 {
 	uint32_t vec_idx = 0;
-	uint64_t desc_addr;
+	uint64_t desc_addr, desc_gaddr;
 	uint32_t mbuf_offset, mbuf_avail;
 	uint32_t desc_offset, desc_avail;
 	uint32_t cpy_len;
-	uint64_t dlen;
+	uint64_t desc_chunck_len;
 	uint64_t hdr_addr, hdr_phys_addr;
 	struct rte_mbuf *hdr_mbuf;
 	struct batch_copy_elem *batch_copy = vq->batch_copy_elems;
+	struct virtio_net_hdr_mrg_rxbuf tmp_hdr, *hdr = NULL;
 	uint16_t copy_nb = vq->batch_copy_nb_elems;
 	int error = 0;
 
@@ -654,26 +655,48 @@ copy_mbuf_to_desc_mergeable(struct virtio_net *dev, struct vhost_virtqueue *vq,
 		goto out;
 	}
 
-	dlen = buf_vec[vec_idx].buf_len;
-	desc_addr = vhost_iova_to_vva(dev, vq, buf_vec[vec_idx].buf_addr,
-						&dlen, VHOST_ACCESS_RW);
-	if (dlen != buf_vec[vec_idx].buf_len ||
-			buf_vec[vec_idx].buf_len < dev->vhost_hlen ||
-			!desc_addr) {
+	desc_chunck_len = buf_vec[vec_idx].buf_len;
+	desc_gaddr = buf_vec[vec_idx].buf_addr;
+	desc_addr = vhost_iova_to_vva(dev, vq,
+					desc_gaddr,
+					&desc_chunck_len,
+					VHOST_ACCESS_RW);
+	if (buf_vec[vec_idx].buf_len < dev->vhost_hlen || !desc_addr) {
 		error = -1;
 		goto out;
 	}
 
 	hdr_mbuf = m;
 	hdr_addr = desc_addr;
-	hdr_phys_addr = buf_vec[vec_idx].buf_addr;
+	if (unlikely(desc_chunck_len < dev->vhost_hlen))
+		hdr = &tmp_hdr;
+	else
+		hdr = (struct virtio_net_hdr_mrg_rxbuf *)(uintptr_t)hdr_addr;
+	hdr_phys_addr = desc_gaddr;
 	rte_prefetch0((void *)(uintptr_t)hdr_addr);
 
 	VHOST_LOG_DEBUG(VHOST_DATA, "(%d) RX: num merge buffers %d\n",
 		dev->vid, num_buffers);
 
 	desc_avail  = buf_vec[vec_idx].buf_len - dev->vhost_hlen;
-	desc_offset = dev->vhost_hlen;
+	if (unlikely(desc_chunck_len < dev->vhost_hlen)) {
+		desc_chunck_len = desc_avail;
+		desc_gaddr += dev->vhost_hlen;
+		desc_addr = vhost_iova_to_vva(dev, vq,
+				desc_gaddr,
+				&desc_chunck_len,
+				VHOST_ACCESS_RW);
+		if (unlikely(!desc_addr)) {
+			error = -1;
+			goto out;
+		}
+
+		desc_offset = 0;
+	} else {
+		desc_offset = dev->vhost_hlen;
+		desc_chunck_len -= dev->vhost_hlen;
+	}
+
 
 	mbuf_avail  = rte_pktmbuf_data_len(m);
 	mbuf_offset = 0;
@@ -681,14 +704,14 @@ copy_mbuf_to_desc_mergeable(struct virtio_net *dev, struct vhost_virtqueue *vq,
 		/* done with current desc buf, get the next one */
 		if (desc_avail == 0) {
 			vec_idx++;
-			dlen = buf_vec[vec_idx].buf_len;
+			desc_chunck_len = buf_vec[vec_idx].buf_len;
+			desc_gaddr = buf_vec[vec_idx].buf_addr;
 			desc_addr =
 				vhost_iova_to_vva(dev, vq,
-					buf_vec[vec_idx].buf_addr,
-					&dlen,
+					desc_gaddr,
+					&desc_chunck_len,
 					VHOST_ACCESS_RW);
-			if (unlikely(!desc_addr ||
-					dlen != buf_vec[vec_idx].buf_len)) {
+			if (unlikely(!desc_addr)) {
 				error = -1;
 				goto out;
 			}
@@ -697,6 +720,17 @@ copy_mbuf_to_desc_mergeable(struct virtio_net *dev, struct vhost_virtqueue *vq,
 			rte_prefetch0((void *)(uintptr_t)desc_addr);
 			desc_offset = 0;
 			desc_avail  = buf_vec[vec_idx].buf_len;
+		} else if (unlikely(desc_chunck_len == 0)) {
+			desc_chunck_len = desc_avail;
+			desc_gaddr += desc_offset;
+			desc_addr = vhost_iova_to_vva(dev, vq,
+					desc_gaddr,
+					&desc_chunck_len, VHOST_ACCESS_RW);
+			if (unlikely(!desc_addr)) {
+				error = -1;
+				goto out;
+			}
+			desc_offset = 0;
 		}
 
 		/* done with current mbuf, get the next one */
@@ -708,30 +742,55 @@ copy_mbuf_to_desc_mergeable(struct virtio_net *dev, struct vhost_virtqueue *vq,
 		}
 
 		if (hdr_addr) {
-			struct virtio_net_hdr_mrg_rxbuf *hdr;
-
-			hdr = (struct virtio_net_hdr_mrg_rxbuf *)(uintptr_t)
-				hdr_addr;
 			virtio_enqueue_offload(hdr_mbuf, &hdr->hdr);
 			ASSIGN_UNLESS_EQUAL(hdr->num_buffers, num_buffers);
 
-			vhost_log_write(dev, hdr_phys_addr, dev->vhost_hlen);
-			PRINT_PACKET(dev, (uintptr_t)hdr_addr,
-				     dev->vhost_hlen, 0);
+			if (unlikely(hdr == &tmp_hdr)) {
+				uint64_t len;
+				uint64_t remain = dev->vhost_hlen;
+				uint64_t src = (uint64_t)(uintptr_t)hdr, dst;
+				uint64_t guest_addr = hdr_phys_addr;
+
+				while (remain) {
+					len = remain;
+					dst = vhost_iova_to_vva(dev, vq,
+							guest_addr, &len,
+							VHOST_ACCESS_RW);
+					if (unlikely(!dst || !len)) {
+						error = -1;
+						goto out;
+					}
+
+					rte_memcpy((void *)(uintptr_t)dst,
+							(void *)(uintptr_t)src,
+							len);
+
+					PRINT_PACKET(dev, (uintptr_t)dst,
+							(uint32_t)len, 0);
+					vhost_log_write(dev, guest_addr, len);
+
+					remain -= len;
+					guest_addr += len;
+					dst += len;
+				}
+			} else {
+				PRINT_PACKET(dev, (uintptr_t)hdr_addr,
+						dev->vhost_hlen, 0);
+				vhost_log_write(dev, hdr_phys_addr,
+						dev->vhost_hlen);
+			}
 
 			hdr_addr = 0;
 		}
 
-		cpy_len = RTE_MIN(desc_avail, mbuf_avail);
+		cpy_len = RTE_MIN(desc_chunck_len, mbuf_avail);
 
 		if (likely(cpy_len > MAX_BATCH_LEN || copy_nb >= vq->size)) {
 			rte_memcpy((void *)((uintptr_t)(desc_addr +
 							desc_offset)),
 				rte_pktmbuf_mtod_offset(m, void *, mbuf_offset),
 				cpy_len);
-			vhost_log_write(dev,
-				buf_vec[vec_idx].buf_addr + desc_offset,
-				cpy_len);
+			vhost_log_write(dev, desc_gaddr + desc_offset, cpy_len);
 			PRINT_PACKET(dev, (uintptr_t)(desc_addr + desc_offset),
 				cpy_len, 0);
 		} else {
@@ -739,8 +798,7 @@ copy_mbuf_to_desc_mergeable(struct virtio_net *dev, struct vhost_virtqueue *vq,
 				(void *)((uintptr_t)(desc_addr + desc_offset));
 			batch_copy[copy_nb].src =
 				rte_pktmbuf_mtod_offset(m, void *, mbuf_offset);
-			batch_copy[copy_nb].log_addr =
-				buf_vec[vec_idx].buf_addr + desc_offset;
+			batch_copy[copy_nb].log_addr = desc_gaddr + desc_offset;
 			batch_copy[copy_nb].len = cpy_len;
 			copy_nb++;
 		}
@@ -749,6 +807,7 @@ copy_mbuf_to_desc_mergeable(struct virtio_net *dev, struct vhost_virtqueue *vq,
 		mbuf_offset += cpy_len;
 		desc_avail  -= cpy_len;
 		desc_offset += cpy_len;
+		desc_chunck_len -= cpy_len;
 	}
 
 out:
-- 
2.14.3

^ permalink raw reply	[flat|nested] 18+ messages in thread

* [dpdk-dev] [PATCH 09/12] examples/vhost: move to safe GPA translation API
  2018-04-23 15:58 [dpdk-dev] [PATCH 00/12] Vhost: CVE-2018-1059 fixes Maxime Coquelin
                   ` (7 preceding siblings ...)
  2018-04-23 15:58 ` [dpdk-dev] [PATCH 08/12] vhost: handle virtually non-contiguous buffers in Rx-mrg Maxime Coquelin
@ 2018-04-23 15:58 ` Maxime Coquelin
  2018-04-23 15:58 ` [dpdk-dev] [PATCH 10/12] examples/vhost_scsi: " Maxime Coquelin
                   ` (3 subsequent siblings)
  12 siblings, 0 replies; 18+ messages in thread
From: Maxime Coquelin @ 2018-04-23 15:58 UTC (permalink / raw)
  To: dev; +Cc: Maxime Coquelin

This patch uses the new rte_vhost_va_from_guest_pa() API
to ensure the application doesn't perform out-of-bounds
accesses, either because a malicious guest provided an
incorrect descriptor length, or because the buffer is
contiguous in guest physical address space but not in the
host process virtual address space.

This issue has been assigned CVE-2018-1059.

Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
---
 examples/vhost/virtio_net.c | 94 +++++++++++++++++++++++++++++++++++++++------
 1 file changed, 83 insertions(+), 11 deletions(-)

diff --git a/examples/vhost/virtio_net.c b/examples/vhost/virtio_net.c
index f6e00674d..5a965a346 100644
--- a/examples/vhost/virtio_net.c
+++ b/examples/vhost/virtio_net.c
@@ -56,16 +56,20 @@ enqueue_pkt(struct vhost_dev *dev, struct rte_vhost_vring *vr,
 	    struct rte_mbuf *m, uint16_t desc_idx)
 {
 	uint32_t desc_avail, desc_offset;
+	uint64_t desc_chunck_len;
 	uint32_t mbuf_avail, mbuf_offset;
 	uint32_t cpy_len;
 	struct vring_desc *desc;
-	uint64_t desc_addr;
+	uint64_t desc_addr, desc_gaddr;
 	struct virtio_net_hdr virtio_hdr = {0, 0, 0, 0, 0, 0};
 	/* A counter to avoid desc dead loop chain */
 	uint16_t nr_desc = 1;
 
 	desc = &vr->desc[desc_idx];
-	desc_addr = rte_vhost_gpa_to_vva(dev->mem, desc->addr);
+	desc_chunck_len = desc->len;
+	desc_gaddr = desc->addr;
+	desc_addr = rte_vhost_va_from_guest_pa(
+			dev->mem, desc_gaddr, &desc_chunck_len);
 	/*
 	 * Checking of 'desc_addr' placed outside of 'unlikely' macro to avoid
 	 * performance issue with some versions of gcc (4.8.4 and 5.3.0) which
@@ -77,9 +81,42 @@ enqueue_pkt(struct vhost_dev *dev, struct rte_vhost_vring *vr,
 	rte_prefetch0((void *)(uintptr_t)desc_addr);
 
 	/* write virtio-net header */
-	*(struct virtio_net_hdr *)(uintptr_t)desc_addr = virtio_hdr;
+	if (likely(desc_chunck_len >= dev->hdr_len)) {
+		*(struct virtio_net_hdr *)(uintptr_t)desc_addr = virtio_hdr;
+		desc_offset = dev->hdr_len;
+	} else {
+		uint64_t len;
+		uint64_t remain = dev->hdr_len;
+		uint64_t src = (uint64_t)(uintptr_t)&virtio_hdr, dst;
+		uint64_t guest_addr = desc_gaddr;
+
+		while (remain) {
+			len = remain;
+			dst = rte_vhost_va_from_guest_pa(dev->mem,
+					guest_addr, &len);
+			if (unlikely(!dst || !len))
+				return -1;
+
+			rte_memcpy((void *)(uintptr_t)dst,
+					(void *)(uintptr_t)src,
+					len);
+
+			remain -= len;
+			guest_addr += len;
+			dst += len;
+		}
+
+		desc_chunck_len = desc->len - dev->hdr_len;
+		desc_gaddr += dev->hdr_len;
+		desc_addr = rte_vhost_va_from_guest_pa(
+				dev->mem, desc_gaddr,
+				&desc_chunck_len);
+		if (unlikely(!desc_addr))
+			return -1;
+
+		desc_offset = 0;
+	}
 
-	desc_offset = dev->hdr_len;
 	desc_avail  = desc->len - dev->hdr_len;
 
 	mbuf_avail  = rte_pktmbuf_data_len(m);
@@ -104,15 +141,28 @@ enqueue_pkt(struct vhost_dev *dev, struct rte_vhost_vring *vr,
 				return -1;
 
 			desc = &vr->desc[desc->next];
-			desc_addr = rte_vhost_gpa_to_vva(dev->mem, desc->addr);
+			desc_chunck_len = desc->len;
+			desc_gaddr = desc->addr;
+			desc_addr = rte_vhost_va_from_guest_pa(
+					dev->mem, desc_gaddr, &desc_chunck_len);
 			if (unlikely(!desc_addr))
 				return -1;
 
 			desc_offset = 0;
 			desc_avail  = desc->len;
+		} else if (unlikely(desc_chunck_len == 0)) {
+			desc_chunck_len = desc_avail;
+			desc_gaddr += desc_offset;
+			desc_addr = rte_vhost_va_from_guest_pa(dev->mem,
+					desc_gaddr,
+					&desc_chunck_len);
+			if (unlikely(!desc_addr))
+				return -1;
+
+			desc_offset = 0;
 		}
 
-		cpy_len = RTE_MIN(desc_avail, mbuf_avail);
+		cpy_len = RTE_MIN(desc_chunck_len, mbuf_avail);
 		rte_memcpy((void *)((uintptr_t)(desc_addr + desc_offset)),
 			rte_pktmbuf_mtod_offset(m, void *, mbuf_offset),
 			cpy_len);
@@ -121,6 +171,7 @@ enqueue_pkt(struct vhost_dev *dev, struct rte_vhost_vring *vr,
 		mbuf_offset += cpy_len;
 		desc_avail  -= cpy_len;
 		desc_offset += cpy_len;
+		desc_chunck_len -= cpy_len;
 	}
 
 	return 0;
@@ -189,8 +240,9 @@ dequeue_pkt(struct vhost_dev *dev, struct rte_vhost_vring *vr,
 	    struct rte_mempool *mbuf_pool)
 {
 	struct vring_desc *desc;
-	uint64_t desc_addr;
+	uint64_t desc_addr, desc_gaddr;
 	uint32_t desc_avail, desc_offset;
+	uint64_t desc_chunck_len;
 	uint32_t mbuf_avail, mbuf_offset;
 	uint32_t cpy_len;
 	struct rte_mbuf *cur = m, *prev = m;
@@ -202,7 +254,10 @@ dequeue_pkt(struct vhost_dev *dev, struct rte_vhost_vring *vr,
 			(desc->flags & VRING_DESC_F_INDIRECT))
 		return -1;
 
-	desc_addr = rte_vhost_gpa_to_vva(dev->mem, desc->addr);
+	desc_chunck_len = desc->len;
+	desc_gaddr = desc->addr;
+	desc_addr = rte_vhost_va_from_guest_pa(
+			dev->mem, desc_gaddr, &desc_chunck_len);
 	if (unlikely(!desc_addr))
 		return -1;
 
@@ -216,7 +271,10 @@ dequeue_pkt(struct vhost_dev *dev, struct rte_vhost_vring *vr,
 	 * header.
 	 */
 	desc = &vr->desc[desc->next];
-	desc_addr = rte_vhost_gpa_to_vva(dev->mem, desc->addr);
+	desc_chunck_len = desc->len;
+	desc_gaddr = desc->addr;
+	desc_addr = rte_vhost_va_from_guest_pa(
+			dev->mem, desc_gaddr, &desc_chunck_len);
 	if (unlikely(!desc_addr))
 		return -1;
 	rte_prefetch0((void *)(uintptr_t)desc_addr);
@@ -228,7 +286,7 @@ dequeue_pkt(struct vhost_dev *dev, struct rte_vhost_vring *vr,
 	mbuf_offset = 0;
 	mbuf_avail  = m->buf_len - RTE_PKTMBUF_HEADROOM;
 	while (1) {
-		cpy_len = RTE_MIN(desc_avail, mbuf_avail);
+		cpy_len = RTE_MIN(desc_chunck_len, mbuf_avail);
 		rte_memcpy(rte_pktmbuf_mtod_offset(cur, void *,
 						   mbuf_offset),
 			(void *)((uintptr_t)(desc_addr + desc_offset)),
@@ -238,6 +296,7 @@ dequeue_pkt(struct vhost_dev *dev, struct rte_vhost_vring *vr,
 		mbuf_offset += cpy_len;
 		desc_avail  -= cpy_len;
 		desc_offset += cpy_len;
+		desc_chunck_len -= cpy_len;
 
 		/* This desc reaches to its end, get the next one */
 		if (desc_avail == 0) {
@@ -249,13 +308,26 @@ dequeue_pkt(struct vhost_dev *dev, struct rte_vhost_vring *vr,
 				return -1;
 			desc = &vr->desc[desc->next];
 
-			desc_addr = rte_vhost_gpa_to_vva(dev->mem, desc->addr);
+			desc_chunck_len = desc->len;
+			desc_gaddr = desc->addr;
+			desc_addr = rte_vhost_va_from_guest_pa(
+					dev->mem, desc_gaddr, &desc_chunck_len);
 			if (unlikely(!desc_addr))
 				return -1;
 			rte_prefetch0((void *)(uintptr_t)desc_addr);
 
 			desc_offset = 0;
 			desc_avail  = desc->len;
+		} else if (unlikely(desc_chunck_len == 0)) {
+			desc_chunck_len = desc_avail;
+			desc_gaddr += desc_offset;
+			desc_addr = rte_vhost_va_from_guest_pa(dev->mem,
+					desc_gaddr,
+					&desc_chunck_len);
+			if (unlikely(!desc_addr))
+				return -1;
+
+			desc_offset = 0;
 		}
 
 		/*
-- 
2.14.3

^ permalink raw reply	[flat|nested] 18+ messages in thread

* [dpdk-dev] [PATCH 10/12] examples/vhost_scsi: move to safe GPA translation API
  2018-04-23 15:58 [dpdk-dev] [PATCH 00/12] Vhost: CVE-2018-1059 fixes Maxime Coquelin
                   ` (8 preceding siblings ...)
  2018-04-23 15:58 ` [dpdk-dev] [PATCH 09/12] examples/vhost: move to safe GPA translation API Maxime Coquelin
@ 2018-04-23 15:58 ` Maxime Coquelin
  2018-04-23 15:58 ` [dpdk-dev] [PATCH 11/12] vhost/crypto: " Maxime Coquelin
                   ` (2 subsequent siblings)
  12 siblings, 0 replies; 18+ messages in thread
From: Maxime Coquelin @ 2018-04-23 15:58 UTC (permalink / raw)
  To: dev; +Cc: Maxime Coquelin

This patch uses the new rte_vhost_va_from_guest_pa() API
to ensure the whole descriptor buffer is mapped contiguously
in the application virtual address space.

As the application did not check the return value of the
previous API, this patch just prints an error if the buffer
address isn't in the vhost memory regions or if the buffer is
scattered. Ideally, it should handle scattered buffers
gracefully.

This issue has been assigned CVE-2018-1059.

Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
---
 examples/vhost_scsi/vhost_scsi.c | 56 +++++++++++++++++++++++++++++++++-------
 1 file changed, 47 insertions(+), 9 deletions(-)

diff --git a/examples/vhost_scsi/vhost_scsi.c b/examples/vhost_scsi/vhost_scsi.c
index 3cb4383e9..2908ff68b 100644
--- a/examples/vhost_scsi/vhost_scsi.c
+++ b/examples/vhost_scsi/vhost_scsi.c
@@ -38,7 +38,7 @@ vhost_scsi_ctrlr_find(__rte_unused const char *ctrlr_name)
 	return g_vhost_ctrlr;
 }
 
-static uint64_t gpa_to_vva(int vid, uint64_t gpa)
+static uint64_t gpa_to_vva(int vid, uint64_t gpa, uint64_t *len)
 {
 	char path[PATH_MAX];
 	struct vhost_scsi_ctrlr *ctrlr;
@@ -58,7 +58,7 @@ static uint64_t gpa_to_vva(int vid, uint64_t gpa)
 
 	assert(ctrlr->mem != NULL);
 
-	return rte_vhost_gpa_to_vva(ctrlr->mem, gpa);
+	return rte_vhost_va_from_guest_pa(ctrlr->mem, gpa, len);
 }
 
 static struct vring_desc *
@@ -108,15 +108,29 @@ static void
 vhost_process_read_payload_chain(struct vhost_scsi_task *task)
 {
 	void *data;
+	uint64_t chunck_len;
 
 	task->iovs_cnt = 0;
+	chunck_len = task->desc->len;
 	task->resp = (void *)(uintptr_t)gpa_to_vva(task->bdev->vid,
-						   task->desc->addr);
+						   task->desc->addr,
+						   &chunck_len);
+	if (!task->resp || chunck_len != task->desc->len) {
+		fprintf(stderr, "failed to translate desc address.\n");
+		return;
+	}
 
 	while (descriptor_has_next(task->desc)) {
 		task->desc = descriptor_get_next(task->vq->desc, task->desc);
+		chunck_len = task->desc->len;
 		data = (void *)(uintptr_t)gpa_to_vva(task->bdev->vid,
-						     task->desc->addr);
+						     task->desc->addr,
+							 &chunck_len);
+		if (!data || chunck_len != task->desc->len) {
+			fprintf(stderr, "failed to translate desc address.\n");
+			return;
+		}
+
 		task->iovs[task->iovs_cnt].iov_base = data;
 		task->iovs[task->iovs_cnt].iov_len = task->desc->len;
 		task->data_len += task->desc->len;
@@ -128,12 +142,20 @@ static void
 vhost_process_write_payload_chain(struct vhost_scsi_task *task)
 {
 	void *data;
+	uint64_t chunck_len;
 
 	task->iovs_cnt = 0;
 
 	do {
+		chunck_len = task->desc->len;
 		data = (void *)(uintptr_t)gpa_to_vva(task->bdev->vid,
-						     task->desc->addr);
+						     task->desc->addr,
+							 &chunck_len);
+		if (!data || chunck_len != task->desc->len) {
+			fprintf(stderr, "failed to translate desc address.\n");
+			return;
+		}
+
 		task->iovs[task->iovs_cnt].iov_base = data;
 		task->iovs[task->iovs_cnt].iov_len = task->desc->len;
 		task->data_len += task->desc->len;
@@ -141,8 +163,12 @@ vhost_process_write_payload_chain(struct vhost_scsi_task *task)
 		task->desc = descriptor_get_next(task->vq->desc, task->desc);
 	} while (descriptor_has_next(task->desc));
 
+	chunck_len = task->desc->len;
 	task->resp = (void *)(uintptr_t)gpa_to_vva(task->bdev->vid,
-						   task->desc->addr);
+						   task->desc->addr,
+						   &chunck_len);
+	if (!task->resp || chunck_len != task->desc->len)
+		fprintf(stderr, "failed to translate desc address.\n");
 }
 
 static struct vhost_block_dev *
@@ -188,6 +214,7 @@ process_requestq(struct vhost_scsi_ctrlr *ctrlr, uint32_t q_idx)
 		int req_idx;
 		uint16_t last_idx;
 		struct vhost_scsi_task *task;
+		uint64_t chunck_len;
 
 		last_idx = scsi_vq->last_used_idx & (vq->size - 1);
 		req_idx = vq->avail->ring[last_idx];
@@ -205,16 +232,27 @@ process_requestq(struct vhost_scsi_ctrlr *ctrlr, uint32_t q_idx)
 		assert((task->desc->flags & VRING_DESC_F_INDIRECT) == 0);
 		scsi_vq->last_used_idx++;
 
+		chunck_len = task->desc->len;
 		task->req = (void *)(uintptr_t)gpa_to_vva(task->bdev->vid,
-							  task->desc->addr);
+							  task->desc->addr,
+							  &chunck_len);
+		if (!task->req || chunck_len != task->desc->len) {
+			fprintf(stderr, "failed to translate desc address.\n");
+			return;
+		}
 
 		task->desc = descriptor_get_next(task->vq->desc, task->desc);
 		if (!descriptor_has_next(task->desc)) {
 			task->dxfer_dir = SCSI_DIR_NONE;
+			chunck_len = task->desc->len;
 			task->resp = (void *)(uintptr_t)
 					      gpa_to_vva(task->bdev->vid,
-							 task->desc->addr);
-
+							 task->desc->addr,
+							 &chunck_len);
+			if (!task->resp || chunck_len != task->desc->len) {
+				fprintf(stderr, "failed to translate desc address.\n");
+				return;
+			}
 		} else if (!descriptor_is_wr(task->desc)) {
 			task->dxfer_dir = SCSI_DIR_TO_DEV;
 			vhost_process_write_payload_chain(task);
-- 
2.14.3

^ permalink raw reply	[flat|nested] 18+ messages in thread

* [dpdk-dev] [PATCH 11/12] vhost/crypto: move to safe GPA translation API
  2018-04-23 15:58 [dpdk-dev] [PATCH 00/12] Vhost: CVE-2018-1059 fixes Maxime Coquelin
                   ` (9 preceding siblings ...)
  2018-04-23 15:58 ` [dpdk-dev] [PATCH 10/12] examples/vhost_scsi: " Maxime Coquelin
@ 2018-04-23 15:58 ` Maxime Coquelin
  2018-04-23 15:58 ` [dpdk-dev] [PATCH 12/12] vhost: deprecate unsafe " Maxime Coquelin
  2018-05-02  5:08 ` [dpdk-dev] [PATCH 00/12] Vhost: CVE-2018-1059 fixes Yao, Lei A
  12 siblings, 0 replies; 18+ messages in thread
From: Maxime Coquelin @ 2018-04-23 15:58 UTC (permalink / raw)
  To: dev; +Cc: Maxime Coquelin

This patch uses the new rte_vhost_va_from_guest_pa() API
to ensure the whole descriptor buffer is mapped contiguously
in the application virtual address space.

It does not handle buffers that are discontiguous in the host
virtual address space; it only returns an error in that case.

This issue has been assigned CVE-2018-1059.

Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
---
 lib/librte_vhost/vhost_crypto.c | 65 ++++++++++++++++++++++++++++++++---------
 1 file changed, 51 insertions(+), 14 deletions(-)

diff --git a/lib/librte_vhost/vhost_crypto.c b/lib/librte_vhost/vhost_crypto.c
index c154ef673..c38eb3bb5 100644
--- a/lib/librte_vhost/vhost_crypto.c
+++ b/lib/librte_vhost/vhost_crypto.c
@@ -42,7 +42,7 @@
 		(1 << VIRTIO_CRYPTO_SERVICE_MAC) |			\
 		(1 << VIRTIO_NET_F_CTRL_VQ))
 
-#define GPA_TO_VVA(t, m, a)	((t)(uintptr_t)rte_vhost_gpa_to_vva(m, a))
+#define GPA_TO_VVA(t, m, a, l)	((t)(uintptr_t)rte_vhost_va_from_guest_pa(m, a, l))
 
 static int
 cipher_algo_transform(uint32_t virtio_cipher_algo)
@@ -476,10 +476,18 @@ static struct virtio_crypto_inhdr *
 reach_inhdr(struct vring_desc *head, struct rte_vhost_memory *mem,
 		struct vring_desc *desc)
 {
+	uint64_t dlen;
+	struct virtio_crypto_inhdr *inhdr;
+
 	while (desc->flags & VRING_DESC_F_NEXT)
 		desc = &head[desc->next];
 
-	return GPA_TO_VVA(struct virtio_crypto_inhdr *, mem, desc->addr);
+	dlen = desc->len;
+	inhdr = GPA_TO_VVA(struct virtio_crypto_inhdr *, mem, desc->addr, &dlen);
+	if (unlikely(dlen != desc->len))
+		return NULL;
+
+	return inhdr;
 }
 
 static __rte_always_inline int
@@ -516,10 +524,17 @@ copy_data(void *dst_data, struct vring_desc *head, struct rte_vhost_memory *mem,
 	uint8_t *data = dst_data;
 	uint8_t *src;
 	int left = size;
+	uint64_t dlen;
 
 	rte_prefetch0(&head[desc->next]);
 	to_copy = RTE_MIN(desc->len, (uint32_t)left);
-	src = GPA_TO_VVA(uint8_t *, mem, desc->addr);
+	dlen = desc->len;
+	src = GPA_TO_VVA(uint8_t *, mem, desc->addr, &dlen);
+	if (unlikely(!src || dlen != desc->len)) {
+		VC_LOG_ERR("Failed to map descriptor");
+		return -1;
+	}
+
 	rte_memcpy((uint8_t *)data, src, to_copy);
 	left -= to_copy;
 
@@ -527,7 +542,13 @@ copy_data(void *dst_data, struct vring_desc *head, struct rte_vhost_memory *mem,
 		desc = &head[desc->next];
 		rte_prefetch0(&head[desc->next]);
 		to_copy = RTE_MIN(desc->len, (uint32_t)left);
-		src = GPA_TO_VVA(uint8_t *, mem, desc->addr);
+		dlen = desc->len;
+		src = GPA_TO_VVA(uint8_t *, mem, desc->addr, &dlen);
+		if (unlikely(!src || dlen != desc->len)) {
+			VC_LOG_ERR("Failed to map descriptor");
+			return -1;
+		}
+
 		rte_memcpy(data + size - left, src, to_copy);
 		left -= to_copy;
 	}
@@ -547,10 +568,11 @@ get_data_ptr(struct vring_desc *head, struct rte_vhost_memory *mem,
 		struct vring_desc **cur_desc, uint32_t size)
 {
 	void *data;
+	uint64_t dlen = (*cur_desc)->len;
 
-	data = GPA_TO_VVA(void *, mem, (*cur_desc)->addr);
-	if (unlikely(!data)) {
-		VC_LOG_ERR("Failed to get object");
+	data = GPA_TO_VVA(void *, mem, (*cur_desc)->addr, &dlen);
+	if (unlikely(!data || dlen != (*cur_desc)->len)) {
+		VC_LOG_ERR("Failed to map object");
 		return NULL;
 	}
 
@@ -570,10 +592,17 @@ write_back_data(struct rte_crypto_op *op, struct vhost_crypto_data_req *vc_req)
 	int left = vc_req->wb_len;
 	uint32_t to_write;
 	uint8_t *src_data = mbuf->buf_addr, *dst;
+	uint64_t dlen;
 
 	rte_prefetch0(&head[desc->next]);
 	to_write = RTE_MIN(desc->len, (uint32_t)left);
-	dst = GPA_TO_VVA(uint8_t *, mem, desc->addr);
+	dlen = desc->len;
+	dst = GPA_TO_VVA(uint8_t *, mem, desc->addr, &dlen);
+	if (unlikely(!dst || dlen != desc->len)) {
+		VC_LOG_ERR("Failed to map descriptor");
+		return -1;
+	}
+
 	rte_memcpy(dst, src_data, to_write);
 	left -= to_write;
 	src_data += to_write;
@@ -582,7 +611,13 @@ write_back_data(struct rte_crypto_op *op, struct vhost_crypto_data_req *vc_req)
 		desc = &head[desc->next];
 		rte_prefetch0(&head[desc->next]);
 		to_write = RTE_MIN(desc->len, (uint32_t)left);
-		dst = GPA_TO_VVA(uint8_t *, mem, desc->addr);
+		dlen = desc->len;
+		dst = GPA_TO_VVA(uint8_t *, mem, desc->addr, &dlen);
+		if (unlikely(!dst || dlen != desc->len)) {
+			VC_LOG_ERR("Failed to map descriptor");
+			return -1;
+		}
+
 		rte_memcpy(dst, src_data, to_write);
 		left -= to_write;
 		src_data += to_write;
@@ -873,19 +908,21 @@ vhost_crypto_process_one_req(struct vhost_crypto *vcrypto,
 	struct virtio_crypto_inhdr *inhdr;
 	struct vring_desc *desc = NULL;
 	uint64_t session_id;
+	uint64_t dlen;
 	int err = 0;
 
 	vc_req->desc_idx = desc_idx;
 
 	if (likely(head->flags & VRING_DESC_F_INDIRECT)) {
-		head = GPA_TO_VVA(struct vring_desc *, mem, head->addr);
-		if (unlikely(!head))
-			return 0;
+		dlen = head->len;
+		desc = GPA_TO_VVA(struct vring_desc *, mem, head->addr, &dlen);
+		if (unlikely(!desc || dlen != head->len))
+			return -1;
 		desc_idx = 0;
+	} else {
+		desc = head;
 	}
 
-	desc = head;
-
 	vc_req->mem = mem;
 	vc_req->head = head;
 	vc_req->vq = vq;
-- 
2.14.3

* [dpdk-dev] [PATCH 12/12] vhost: deprecate unsafe GPA translation API
  2018-04-23 15:58 [dpdk-dev] [PATCH 00/12] Vhost: CVE-2018-1059 fixes Maxime Coquelin
                   ` (10 preceding siblings ...)
  2018-04-23 15:58 ` [dpdk-dev] [PATCH 11/12] vhost/crypto: " Maxime Coquelin
@ 2018-04-23 15:58 ` Maxime Coquelin
  2018-05-02  5:08 ` [dpdk-dev] [PATCH 00/12] Vhost: CVE-2018-1059 fixes Yao, Lei A
  12 siblings, 0 replies; 18+ messages in thread
From: Maxime Coquelin @ 2018-04-23 15:58 UTC (permalink / raw)
  To: dev; +Cc: Maxime Coquelin

This patch marks rte_vhost_gpa_to_vva() as deprecated because
it is unsafe. Applications relying on this API should move
to the new rte_vhost_va_from_guest_pa() API and check the
returned length to avoid out-of-bounds accesses.

This issue has been assigned CVE-2018-1059.

Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
---
 lib/librte_vhost/rte_vhost.h | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/lib/librte_vhost/rte_vhost.h b/lib/librte_vhost/rte_vhost.h
index 7d065137a..7f0cb9bc8 100644
--- a/lib/librte_vhost/rte_vhost.h
+++ b/lib/librte_vhost/rte_vhost.h
@@ -124,6 +124,11 @@ struct vhost_device_ops {
 /**
  * Convert guest physical address to host virtual address
  *
+ * This function is deprecated because it is unsafe.
+ * The new rte_vhost_va_from_guest_pa() should be used instead to ensure
+ * guest physical ranges are fully and contiguously mapped into
+ * the process virtual address space.
+ *
  * @param mem
  *  the guest memory regions
  * @param gpa
@@ -131,6 +136,7 @@ struct vhost_device_ops {
  * @return
  *  the host virtual address on success, 0 on failure
  */
+__rte_deprecated
 static __rte_always_inline uint64_t
 rte_vhost_gpa_to_vva(struct rte_vhost_memory *mem, uint64_t gpa)
 {
-- 
2.14.3

* Re: [dpdk-dev] [PATCH 00/12] Vhost: CVE-2018-1059 fixes
  2018-04-23 15:58 [dpdk-dev] [PATCH 00/12] Vhost: CVE-2018-1059 fixes Maxime Coquelin
                   ` (11 preceding siblings ...)
  2018-04-23 15:58 ` [dpdk-dev] [PATCH 12/12] vhost: deprecate unsafe " Maxime Coquelin
@ 2018-05-02  5:08 ` Yao, Lei A
  2018-05-02  9:20   ` Maxime Coquelin
  12 siblings, 1 reply; 18+ messages in thread
From: Yao, Lei A @ 2018-05-02  5:08 UTC (permalink / raw)
  To: Maxime Coquelin, dev; +Cc: Bie, Tiwei

Hi, Maxime

During the 18.05-rc1 performance testing, I found that this patch set brings
a slight performance drop on the mergeable and normal paths, and a big
performance drop on the vector path. Could you have a look at this? I know this
patch set is important for security. I am not sure if there is any way to
improve the performance.

Mergeable
packet size	delta
64	+0.80%
128	-2.75%
260	-2.93%
520	-2.72%
1024	-1.18%
1500	-0.65%

Normal
packet size	delta
64	-1.47%
128	-7.43%
260	-3.66%
520	-2.52%
1024	-1.19%
1500	-0.78%

Vector
packet size	delta
64	-8.60%
128	-3.54%
260	-2.63%
520	-6.12%
1024	-1.05%
1500	-1.20%

CPU info: Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz
OS: Ubuntu 16.04

BRs
Lei

> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Maxime Coquelin
> Sent: Monday, April 23, 2018 11:58 PM
> To: dev@dpdk.org
> Cc: Maxime Coquelin <maxime.coquelin@redhat.com>
> Subject: [dpdk-dev] [PATCH 00/12] Vhost: CVE-2018-1059 fixes
> 
> This series fixes the security vulnerability referenced
> as CVE-2018-1059.
> 
> Patches are already applied to the branch, but reviews
> are encouraged. Any issues spotted would be fixed on top.
> 
> Maxime Coquelin (12):
>   vhost: fix indirect descriptors table translation size
>   vhost: check all range is mapped when translating GPAs
>   vhost: introduce safe API for GPA translation
>   vhost: ensure all range is mapped when translating QVAs
>   vhost: add support for non-contiguous indirect descs tables
>   vhost: handle virtually non-contiguous buffers in Tx
>   vhost: handle virtually non-contiguous buffers in Rx
>   vhost: handle virtually non-contiguous buffers in Rx-mrg
>   examples/vhost: move to safe GPA translation API
>   examples/vhost_scsi: move to safe GPA translation API
>   vhost/crypto: move to safe GPA translation API
>   vhost: deprecate unsafe GPA translation API
> 
>  examples/vhost/virtio_net.c            |  94 +++++++-
>  examples/vhost_scsi/vhost_scsi.c       |  56 ++++-
>  lib/librte_vhost/rte_vhost.h           |  46 ++++
>  lib/librte_vhost/rte_vhost_version.map |   4 +-
>  lib/librte_vhost/vhost.c               |  39 ++--
>  lib/librte_vhost/vhost.h               |   8 +-
>  lib/librte_vhost/vhost_crypto.c        |  65 ++++--
>  lib/librte_vhost/vhost_user.c          |  58 +++--
>  lib/librte_vhost/virtio_net.c          | 411 ++++++++++++++++++++++++++++-
> ----
>  9 files changed, 650 insertions(+), 131 deletions(-)
> 
> --
> 2.14.3

* Re: [dpdk-dev] [PATCH 00/12] Vhost: CVE-2018-1059 fixes
  2018-05-02  5:08 ` [dpdk-dev] [PATCH 00/12] Vhost: CVE-2018-1059 fixes Yao, Lei A
@ 2018-05-02  9:20   ` Maxime Coquelin
  2018-05-02 12:10     ` Yao, Lei A
  0 siblings, 1 reply; 18+ messages in thread
From: Maxime Coquelin @ 2018-05-02  9:20 UTC (permalink / raw)
  To: Yao, Lei A, dev; +Cc: Bie, Tiwei

Hi Lei,

Thanks for the perf report.

On 05/02/2018 07:08 AM, Yao, Lei A wrote:
> Hi, Maxime
> 
> During the 18.05-rc1 performance testing, I find this patch set will bring
> slightly performance drop on mergeable and normal path, and big performance
> drop on vector path. Could you have a check on this? I know this patch is
> important for security. Not sure if there is any way to improve the performance.
> 

Could you please share info about the use cases you are benchmarking?

There may be ways to improve the performance, for this we would need to
profile the code to understand where the bottlenecks are.


> Mergebale	
> packet size	
> 64	0.80%
> 128	-2.75%
> 260	-2.93%
> 520	-2.72%
> 1024	-1.18%
> 1500	-0.65%
> 	
> Normal	
> packet size	
> 64	-1.47%
> 128	-7.43%
> 260	-3.66%
> 520	-2.52%
> 1024	-1.19%
> 1500	-0.78%
> 	
> Vector	
> packet size	
> 64	-8.60%
> 128	-3.54%
> 260	-2.63%
> 520	-6.12%
> 1024	-1.05%
> 1500	-1.20%

Are you sure this is only this series that induces such a big
performance drop in vector test? I.e. have you run the benchmark
just before and right after the series is applied?

Thanks,
Maxime
> CPU info: Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz
> OS: Ubuntu 16.04
> 
> BRs
> Lei
> 
>> -----Original Message-----
>> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Maxime Coquelin
>> Sent: Monday, April 23, 2018 11:58 PM
>> To: dev@dpdk.org
>> Cc: Maxime Coquelin <maxime.coquelin@redhat.com>
>> Subject: [dpdk-dev] [PATCH 00/12] Vhost: CVE-2018-1059 fixes
>>
>> This series fixes the security vulnerability referenced
>> as CVE-2018-1059.
>>
>> Patches are already applied to the branch, but reviews
>> are encouraged. Any issues spotted would be fixed on top.
>>
>> Maxime Coquelin (12):
>>    vhost: fix indirect descriptors table translation size
>>    vhost: check all range is mapped when translating GPAs
>>    vhost: introduce safe API for GPA translation
>>    vhost: ensure all range is mapped when translating QVAs
>>    vhost: add support for non-contiguous indirect descs tables
>>    vhost: handle virtually non-contiguous buffers in Tx
>>    vhost: handle virtually non-contiguous buffers in Rx
>>    vhost: handle virtually non-contiguous buffers in Rx-mrg
>>    examples/vhost: move to safe GPA translation API
>>    examples/vhost_scsi: move to safe GPA translation API
>>    vhost/crypto: move to safe GPA translation API
>>    vhost: deprecate unsafe GPA translation API
>>
>>   examples/vhost/virtio_net.c            |  94 +++++++-
>>   examples/vhost_scsi/vhost_scsi.c       |  56 ++++-
>>   lib/librte_vhost/rte_vhost.h           |  46 ++++
>>   lib/librte_vhost/rte_vhost_version.map |   4 +-
>>   lib/librte_vhost/vhost.c               |  39 ++--
>>   lib/librte_vhost/vhost.h               |   8 +-
>>   lib/librte_vhost/vhost_crypto.c        |  65 ++++--
>>   lib/librte_vhost/vhost_user.c          |  58 +++--
>>   lib/librte_vhost/virtio_net.c          | 411 ++++++++++++++++++++++++++++-
>> ----
>>   9 files changed, 650 insertions(+), 131 deletions(-)
>>
>> --
>> 2.14.3
> 

* Re: [dpdk-dev] [PATCH 00/12] Vhost: CVE-2018-1059 fixes
  2018-05-02  9:20   ` Maxime Coquelin
@ 2018-05-02 12:10     ` Yao, Lei A
  2018-05-18  2:02       ` Yao, Lei A
  0 siblings, 1 reply; 18+ messages in thread
From: Yao, Lei A @ 2018-05-02 12:10 UTC (permalink / raw)
  To: Maxime Coquelin, dev; +Cc: Bie, Tiwei



> -----Original Message-----
> From: Maxime Coquelin [mailto:maxime.coquelin@redhat.com]
> Sent: Wednesday, May 2, 2018 5:20 PM
> To: Yao, Lei A <lei.a.yao@intel.com>; dev@dpdk.org
> Cc: Bie, Tiwei <tiwei.bie@intel.com>
> Subject: Re: [dpdk-dev] [PATCH 00/12] Vhost: CVE-2018-1059 fixes
> 
> Hi Lei,
> 
> Thanks for the perf report.
> 
> On 05/02/2018 07:08 AM, Yao, Lei A wrote:
> > Hi, Maxime
> >
> > During the 18.05-rc1 performance testing, I find this patch set will bring
> > slightly performance drop on mergeable and normal path, and big
> performance
> > drop on vector path. Could you have a check on this? I know this patch is
> > important for security. Not sure if there is any way to improve the
> performance.
> >
> 
> Could you please share info about the use cases you are benchmarking?
> 
I ran the vhost/virtio loopback test.
> There may be ways to improve the performance, for this we would need to
> profile the code to understand where the bottlenecks are.
> 
> 
> > Mergebale
> > packet size
> > 64	0.80%
> > 128	-2.75%
> > 260	-2.93%
> > 520	-2.72%
> > 1024	-1.18%
> > 1500	-0.65%
> >
> > Normal
> > packet size
> > 64	-1.47%
> > 128	-7.43%
> > 260	-3.66%
> > 520	-2.52%
> > 1024	-1.19%
> > 1500	-0.78%
> >
> > Vector
> > packet size
> > 64	-8.60%
> > 128	-3.54%
> > 260	-2.63%
> > 520	-6.12%
> > 1024	-1.05%
> > 1500	-1.20%
> 
> Are you sure this is only this series that induces such a big
> performance drop in vector test? I.e. have you run the benchmark
> just before and right after the series is applied?
Yes. The performance drop I list here compares the runs just before and after
your patch set. The key patch bringing the performance drop is this commit:
" Commit hash:	41333fba5b98945b8051e7b48f8fe47432cdd356"
vhost: introduce safe API for GPA translation.

Between 18.02 and 18.05-rc1, there are some other performance drops, but not
so large. I need more git bisect work to identify them.


> 
> Thanks,
> Maxime
> > CPU info: Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz
> > OS: Ubuntu 16.04
> >
> > BRs
> > Lei
> >
> >> -----Original Message-----
> >> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Maxime
> Coquelin
> >> Sent: Monday, April 23, 2018 11:58 PM
> >> To: dev@dpdk.org
> >> Cc: Maxime Coquelin <maxime.coquelin@redhat.com>
> >> Subject: [dpdk-dev] [PATCH 00/12] Vhost: CVE-2018-1059 fixes
> >>
> >> This series fixes the security vulnerability referenced
> >> as CVE-2018-1059.
> >>
> >> Patches are already applied to the branch, but reviews
> >> are encouraged. Any issues spotted would be fixed on top.
> >>
> >> Maxime Coquelin (12):
> >>    vhost: fix indirect descriptors table translation size
> >>    vhost: check all range is mapped when translating GPAs
> >>    vhost: introduce safe API for GPA translation
> >>    vhost: ensure all range is mapped when translating QVAs
> >>    vhost: add support for non-contiguous indirect descs tables
> >>    vhost: handle virtually non-contiguous buffers in Tx
> >>    vhost: handle virtually non-contiguous buffers in Rx
> >>    vhost: handle virtually non-contiguous buffers in Rx-mrg
> >>    examples/vhost: move to safe GPA translation API
> >>    examples/vhost_scsi: move to safe GPA translation API
> >>    vhost/crypto: move to safe GPA translation API
> >>    vhost: deprecate unsafe GPA translation API
> >>
> >>   examples/vhost/virtio_net.c            |  94 +++++++-
> >>   examples/vhost_scsi/vhost_scsi.c       |  56 ++++-
> >>   lib/librte_vhost/rte_vhost.h           |  46 ++++
> >>   lib/librte_vhost/rte_vhost_version.map |   4 +-
> >>   lib/librte_vhost/vhost.c               |  39 ++--
> >>   lib/librte_vhost/vhost.h               |   8 +-
> >>   lib/librte_vhost/vhost_crypto.c        |  65 ++++--
> >>   lib/librte_vhost/vhost_user.c          |  58 +++--
> >>   lib/librte_vhost/virtio_net.c          | 411
> ++++++++++++++++++++++++++++-
> >> ----
> >>   9 files changed, 650 insertions(+), 131 deletions(-)
> >>
> >> --
> >> 2.14.3
> >

* Re: [dpdk-dev] [PATCH 00/12] Vhost: CVE-2018-1059 fixes
  2018-05-02 12:10     ` Yao, Lei A
@ 2018-05-18  2:02       ` Yao, Lei A
  2018-05-18  7:15         ` Maxime Coquelin
  0 siblings, 1 reply; 18+ messages in thread
From: Yao, Lei A @ 2018-05-18  2:02 UTC (permalink / raw)
  To: 'Maxime Coquelin', 'dev@dpdk.org'; +Cc: Bie, Tiwei

Hi, Maxime

Any idea about this performance drop? Will it be improved in this release,
or will it be long-term work? Thanks.

BRs
Lei

> -----Original Message-----
> From: Yao, Lei A
> Sent: Wednesday, May 2, 2018 8:10 PM
> To: Maxime Coquelin <maxime.coquelin@redhat.com>; dev@dpdk.org
> Cc: Bie, Tiwei <tiwei.bie@intel.com>
> Subject: RE: [dpdk-dev] [PATCH 00/12] Vhost: CVE-2018-1059 fixes
> 
> 
> 
> > -----Original Message-----
> > From: Maxime Coquelin [mailto:maxime.coquelin@redhat.com]
> > Sent: Wednesday, May 2, 2018 5:20 PM
> > To: Yao, Lei A <lei.a.yao@intel.com>; dev@dpdk.org
> > Cc: Bie, Tiwei <tiwei.bie@intel.com>
> > Subject: Re: [dpdk-dev] [PATCH 00/12] Vhost: CVE-2018-1059 fixes
> >
> > Hi Lei,
> >
> > Thanks for the perf report.
> >
> > On 05/02/2018 07:08 AM, Yao, Lei A wrote:
> > > Hi, Maxime
> > >
> > > During the 18.05-rc1 performance testing, I find this patch set will bring
> > > slightly performance drop on mergeable and normal path, and big
> > performance
> > > drop on vector path. Could you have a check on this? I know this patch is
> > > important for security. Not sure if there is any way to improve the
> > performance.
> > >
> >
> > Could you please share info about the use cases you are benchmarking?
> >
> I run vhost/virtio loopback test .
> > There may be ways to improve the performance, for this we would need to
> > profile the code to understand where the bottlenecks are.
> >
> >
> > > Mergebale
> > > packet size
> > > 64	0.80%
> > > 128	-2.75%
> > > 260	-2.93%
> > > 520	-2.72%
> > > 1024	-1.18%
> > > 1500	-0.65%
> > >
> > > Normal
> > > packet size
> > > 64	-1.47%
> > > 128	-7.43%
> > > 260	-3.66%
> > > 520	-2.52%
> > > 1024	-1.19%
> > > 1500	-0.78%
> > >
> > > Vector
> > > packet size
> > > 64	-8.60%
> > > 128	-3.54%
> > > 260	-2.63%
> > > 520	-6.12%
> > > 1024	-1.05%
> > > 1500	-1.20%
> >
> > Are you sure this is only this series that induces such a big
> > performance drop in vector test? I.e. have you run the benchmark
> > just before and right after the series is applied?
> Yes. The performance drop I list here is just compared before and after your
> patch set. The key patch bring performance drop is this commit
> " Commit hash:	41333fba5b98945b8051e7b48f8fe47432cdd356"
> vhost: introduce safe API for GPA translation.
> 
> Between 18.02 and 18.05-rc1, there are some other performance drop, but
> not
> so large. I need more git bisect work to identify.
> 
> 
> >
> > Thanks,
> > Maxime
> > > CPU info: Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz
> > > OS: Ubuntu 16.04
> > >
> > > BRs
> > > Lei
> > >
> > >> -----Original Message-----
> > >> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Maxime
> > Coquelin
> > >> Sent: Monday, April 23, 2018 11:58 PM
> > >> To: dev@dpdk.org
> > >> Cc: Maxime Coquelin <maxime.coquelin@redhat.com>
> > >> Subject: [dpdk-dev] [PATCH 00/12] Vhost: CVE-2018-1059 fixes
> > >>
> > >> This series fixes the security vulnerability referenced
> > >> as CVE-2018-1059.
> > >>
> > >> Patches are already applied to the branch, but reviews
> > >> are encouraged. Any issues spotted would be fixed on top.
> > >>
> > >> Maxime Coquelin (12):
> > >>    vhost: fix indirect descriptors table translation size
> > >>    vhost: check all range is mapped when translating GPAs
> > >>    vhost: introduce safe API for GPA translation
> > >>    vhost: ensure all range is mapped when translating QVAs
> > >>    vhost: add support for non-contiguous indirect descs tables
> > >>    vhost: handle virtually non-contiguous buffers in Tx
> > >>    vhost: handle virtually non-contiguous buffers in Rx
> > >>    vhost: handle virtually non-contiguous buffers in Rx-mrg
> > >>    examples/vhost: move to safe GPA translation API
> > >>    examples/vhost_scsi: move to safe GPA translation API
> > >>    vhost/crypto: move to safe GPA translation API
> > >>    vhost: deprecate unsafe GPA translation API
> > >>
> > >>   examples/vhost/virtio_net.c            |  94 +++++++-
> > >>   examples/vhost_scsi/vhost_scsi.c       |  56 ++++-
> > >>   lib/librte_vhost/rte_vhost.h           |  46 ++++
> > >>   lib/librte_vhost/rte_vhost_version.map |   4 +-
> > >>   lib/librte_vhost/vhost.c               |  39 ++--
> > >>   lib/librte_vhost/vhost.h               |   8 +-
> > >>   lib/librte_vhost/vhost_crypto.c        |  65 ++++--
> > >>   lib/librte_vhost/vhost_user.c          |  58 +++--
> > >>   lib/librte_vhost/virtio_net.c          | 411
> > ++++++++++++++++++++++++++++-
> > >> ----
> > >>   9 files changed, 650 insertions(+), 131 deletions(-)
> > >>
> > >> --
> > >> 2.14.3
> > >

* Re: [dpdk-dev] [PATCH 00/12] Vhost: CVE-2018-1059 fixes
  2018-05-18  2:02       ` Yao, Lei A
@ 2018-05-18  7:15         ` Maxime Coquelin
  0 siblings, 0 replies; 18+ messages in thread
From: Maxime Coquelin @ 2018-05-18  7:15 UTC (permalink / raw)
  To: Yao, Lei A, 'dev@dpdk.org'; +Cc: Bie, Tiwei

Hi,

On 05/18/2018 04:02 AM, Yao, Lei A wrote:
> Hi, Maxime
> 
> Any idea for this performance drop? Will we improve it in this release
> or it will be long term work? Thanks.

No, it will not be improved for this release. I'll create a Bz to track
this so that the release note can mention it.

Regards,
Maxime

> BRs
> Lei
> 
>> -----Original Message-----
>> From: Yao, Lei A
>> Sent: Wednesday, May 2, 2018 8:10 PM
>> To: Maxime Coquelin <maxime.coquelin@redhat.com>; dev@dpdk.org
>> Cc: Bie, Tiwei <tiwei.bie@intel.com>
>> Subject: RE: [dpdk-dev] [PATCH 00/12] Vhost: CVE-2018-1059 fixes
>>
>>
>>
>>> -----Original Message-----
>>> From: Maxime Coquelin [mailto:maxime.coquelin@redhat.com]
>>> Sent: Wednesday, May 2, 2018 5:20 PM
>>> To: Yao, Lei A <lei.a.yao@intel.com>; dev@dpdk.org
>>> Cc: Bie, Tiwei <tiwei.bie@intel.com>
>>> Subject: Re: [dpdk-dev] [PATCH 00/12] Vhost: CVE-2018-1059 fixes
>>>
>>> Hi Lei,
>>>
>>> Thanks for the perf report.
>>>
>>> On 05/02/2018 07:08 AM, Yao, Lei A wrote:
>>>> Hi, Maxime
>>>>
>>>> During the 18.05-rc1 performance testing, I find this patch set will bring
>>>> slightly performance drop on mergeable and normal path, and big
>>> performance
>>>> drop on vector path. Could you have a check on this? I know this patch is
>>>> important for security. Not sure if there is any way to improve the
>>> performance.
>>>>
>>>
>>> Could you please share info about the use cases you are benchmarking?
>>>
>> I run vhost/virtio loopback test .
>>> There may be ways to improve the performance, for this we would need to
>>> profile the code to understand where the bottlenecks are.
>>>
>>>
>>>> Mergebale
>>>> packet size
>>>> 64	0.80%
>>>> 128	-2.75%
>>>> 260	-2.93%
>>>> 520	-2.72%
>>>> 1024	-1.18%
>>>> 1500	-0.65%
>>>>
>>>> Normal
>>>> packet size
>>>> 64	-1.47%
>>>> 128	-7.43%
>>>> 260	-3.66%
>>>> 520	-2.52%
>>>> 1024	-1.19%
>>>> 1500	-0.78%
>>>>
>>>> Vector
>>>> packet size
>>>> 64	-8.60%
>>>> 128	-3.54%
>>>> 260	-2.63%
>>>> 520	-6.12%
>>>> 1024	-1.05%
>>>> 1500	-1.20%
>>>
>>> Are you sure this is only this series that induces such a big
>>> performance drop in vector test? I.e. have you run the benchmark
>>> just before and right after the series is applied?
>> Yes. The performance drop I list here is just compared before and after your
>> patch set. The key patch bring performance drop is this commit
>> " Commit hash:	41333fba5b98945b8051e7b48f8fe47432cdd356"
>> vhost: introduce safe API for GPA translation.
>>
>> Between 18.02 and 18.05-rc1, there are some other performance drop, but
>> not
>> so large. I need more git bisect work to identify.
>>
>>
>>>
>>> Thanks,
>>> Maxime
>>>> CPU info: Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz
>>>> OS: Ubuntu 16.04
>>>>
>>>> BRs
>>>> Lei
>>>>
>>>>> -----Original Message-----
>>>>> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Maxime
>>> Coquelin
>>>>> Sent: Monday, April 23, 2018 11:58 PM
>>>>> To: dev@dpdk.org
>>>>> Cc: Maxime Coquelin <maxime.coquelin@redhat.com>
>>>>> Subject: [dpdk-dev] [PATCH 00/12] Vhost: CVE-2018-1059 fixes
>>>>>
>>>>> This series fixes the security vulnerability referenced
>>>>> as CVE-2018-1059.
>>>>>
>>>>> Patches are already applied to the branch, but reviews
>>>>> are encouraged. Any issues spotted would be fixed on top.
>>>>>
>>>>> Maxime Coquelin (12):
>>>>>     vhost: fix indirect descriptors table translation size
>>>>>     vhost: check all range is mapped when translating GPAs
>>>>>     vhost: introduce safe API for GPA translation
>>>>>     vhost: ensure all range is mapped when translating QVAs
>>>>>     vhost: add support for non-contiguous indirect descs tables
>>>>>     vhost: handle virtually non-contiguous buffers in Tx
>>>>>     vhost: handle virtually non-contiguous buffers in Rx
>>>>>     vhost: handle virtually non-contiguous buffers in Rx-mrg
>>>>>     examples/vhost: move to safe GPA translation API
>>>>>     examples/vhost_scsi: move to safe GPA translation API
>>>>>     vhost/crypto: move to safe GPA translation API
>>>>>     vhost: deprecate unsafe GPA translation API
>>>>>
>>>>>    examples/vhost/virtio_net.c            |  94 +++++++-
>>>>>    examples/vhost_scsi/vhost_scsi.c       |  56 ++++-
>>>>>    lib/librte_vhost/rte_vhost.h           |  46 ++++
>>>>>    lib/librte_vhost/rte_vhost_version.map |   4 +-
>>>>>    lib/librte_vhost/vhost.c               |  39 ++--
>>>>>    lib/librte_vhost/vhost.h               |   8 +-
>>>>>    lib/librte_vhost/vhost_crypto.c        |  65 ++++--
>>>>>    lib/librte_vhost/vhost_user.c          |  58 +++--
>>>>>    lib/librte_vhost/virtio_net.c          | 411
>>> ++++++++++++++++++++++++++++-
>>>>> ----
>>>>>    9 files changed, 650 insertions(+), 131 deletions(-)
>>>>>
>>>>> --
>>>>> 2.14.3
>>>>
