DPDK patches and discussions
* [dpdk-dev] [PATCH 00/14] implement packed virtqueues
@ 2018-01-29 14:11 Jens Freimann
  2018-01-29 14:11 ` [dpdk-dev] [PATCH 01/14] net/virtio: vring init for packed queues Jens Freimann
                   ` (14 more replies)
  0 siblings, 15 replies; 27+ messages in thread
From: Jens Freimann @ 2018-01-29 14:11 UTC (permalink / raw)
  To: dev; +Cc: tiwei.bie, yliu, maxime.coquelin, mst

This is a basic implementation of packed virtqueues as specified in the
Virtio 1.1 draft. A compiled version of the current draft is available
at https://github.com/oasis-tcs/virtio-docs.git (or as .pdf at
https://github.com/oasis-tcs/virtio-docs/blob/master/virtio-v1.1-packed-wd07.pdf).

Indirect descriptors and checksum offloading are not implemented yet.
VIRTIO_F_IN_ORDER is also not implemented, nor is support for mergeable
buffers with packed queues. Patches for these will follow soon.

A packed virtqueue differs from a split virtqueue in that it consists of
a single descriptor ring that replaces the available ring, the used ring,
their indexes and the descriptor buffer.

Each descriptor is readable and writable and has a flags field. These
flags mark whether a descriptor is available or used. To detect new
available descriptors even after the ring has wrapped, device and driver
each maintain a single-bit wrap counter that is flipped from 0 to 1 and
vice versa every time the last descriptor in the ring is made available
or used.
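
A rough, self-contained sketch of that mechanism (it mirrors the
descriptor layout and helpers added in patches 3, 4 and 11 of this
series; the names here are for illustration, not the final spec wording):

#include <stdint.h>

/* Packed descriptor format used throughout this series. */
struct vring_desc_1_1 {
	uint64_t addr;
	uint32_t len;
	uint16_t index;
	uint16_t flags;
};

#define VIRTQ_DESC_F_AVAIL  7
#define VIRTQ_DESC_F_USED   15
#define DESC_AVAIL (1ULL << VIRTQ_DESC_F_AVAIL)
#define DESC_USED  (1ULL << VIRTQ_DESC_F_USED)

/* Driver: publish a descriptor under its current avail wrap counter. */
static inline void
set_desc_avail(struct vring_desc_1_1 *desc, int avail_wrap_counter)
{
	uint16_t flags = desc->flags;

	if (avail_wrap_counter) {
		flags |= DESC_AVAIL;
		flags &= ~DESC_USED;
	} else {
		flags &= ~DESC_AVAIL;
		flags |= DESC_USED;
	}
	desc->flags = flags;
}

/* Device: check whether the driver made this descriptor available under
 * the device's current used wrap counter; marking it used later makes
 * both flag bits equal again.
 */
static inline int
desc_is_avail(struct vring_desc_1_1 *desc, int used_wrap_counter)
{
	if (used_wrap_counter)
		return (desc->flags & DESC_AVAIL) && !(desc->flags & DESC_USED);
	return !(desc->flags & DESC_AVAIL) && (desc->flags & DESC_USED);
}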

The idea behind this is to 1) improve performance by avoiding cache
misses and 2) make the ring easier for devices to implement.

Regarding performance: with these patches I get 21.13 Mpps on my system,
compared to 18.8 Mpps with the virtio 1.0 code. Packet size was 64 bytes
with 0.05% acceptable loss. The test setup is as described in
http://dpdk.org/doc/guides/howto/pvp_reference_benchmark.html

Packet generator:
MoonGen
Intel(R) Xeon(R) CPU E5-2665 0 @ 2.40GHz
Intel X710 NIC
RHEL 7.4

Device under test:
Intel(R) Xeon(R) CPU E5-2667 v4 @ 3.20GHz
Intel X710 NIC
RHEL 7.4

VM on DuT: RHEL 7.4

I plan to do more performance tests with bigger frame sizes.

This patch series is based on a prototype implemented by Yuanhan Liu and
Tiwei Bie. 


regards,

Jens Freimann (10):
  net/virtio: vring init for packed queues
  net/virtio: don't call virtio_disable_intr() for packed queues
  net/virtio: add virtio 1.1 defines
  net/virtio: add packed virtqueue helpers
  net/virtio: don't dump split virtqueue data
  net/virtio: implement transmit path for packed queues
  vhost: add virtio 1.1 defines
  vhost: add helpers for packed virtqueues
  vhost: dequeue for packed queues
  vhost: packed queue enqueue path

Yuanhan Liu (4):
  net/virtio-user: add option to use packed queues
  net/virtio: implement receive path for packed queues
  vhost: vring address setup for packed queues
  vhost: enable packed virtqueues

 drivers/net/virtio/Makefile                      |   1 +
 drivers/net/virtio/virtio_ethdev.c               |  50 ++--
 drivers/net/virtio/virtio_ethdev.h               |   5 +
 drivers/net/virtio/virtio_pci.h                  |   8 +
 drivers/net/virtio/virtio_ring.h                 |  63 ++++-
 drivers/net/virtio/virtio_rxtx.c                 | 141 +++++++++-
 drivers/net/virtio/virtio_rxtx_1.1.c             | 161 ++++++++++++
 drivers/net/virtio/virtio_user/virtio_user_dev.c |  12 +-
 drivers/net/virtio/virtio_user/virtio_user_dev.h |   3 +-
 drivers/net/virtio/virtio_user_ethdev.c          |  15 +-
 drivers/net/virtio/virtqueue.c                   |   3 +
 drivers/net/virtio/virtqueue.h                   |  14 +-
 lib/librte_vhost/vhost.c                         |   5 +
 lib/librte_vhost/vhost.h                         |   7 +-
 lib/librte_vhost/vhost_user.c                    |  17 +-
 lib/librte_vhost/virtio-1.1.h                    |  63 +++++
 lib/librte_vhost/virtio_net.c                    | 321 ++++++++++++++++++++++-
 17 files changed, 859 insertions(+), 30 deletions(-)
 create mode 100644 drivers/net/virtio/virtio_rxtx_1.1.c
 create mode 100644 lib/librte_vhost/virtio-1.1.h

-- 
2.14.3


* [dpdk-dev] [PATCH 01/14] net/virtio: vring init for packed queues
  2018-01-29 14:11 [dpdk-dev] [PATCH 00/14] implement packed virtqueues Jens Freimann
@ 2018-01-29 14:11 ` Jens Freimann
  2018-01-29 14:11 ` [dpdk-dev] [PATCH 02/14] net/virtio: don't call virtio_disable_intr() " Jens Freimann
                   ` (13 subsequent siblings)
  14 siblings, 0 replies; 27+ messages in thread
From: Jens Freimann @ 2018-01-29 14:11 UTC (permalink / raw)
  To: dev; +Cc: tiwei.bie, yliu, maxime.coquelin, mst

Add and initialize descriptor data structures.

Signed-off-by: Jens Freimann <jfreimann@redhat.com>
---
 drivers/net/virtio/virtio_ethdev.c | 22 ++++++++++++----------
 drivers/net/virtio/virtio_pci.h    |  8 ++++++++
 drivers/net/virtio/virtio_ring.h   | 25 +++++++++++++++++++++++--
 drivers/net/virtio/virtqueue.h     | 10 ++++++++++
 4 files changed, 53 insertions(+), 12 deletions(-)

diff --git a/drivers/net/virtio/virtio_ethdev.c b/drivers/net/virtio/virtio_ethdev.c
index 65cb71fa6..825b6c303 100644
--- a/drivers/net/virtio/virtio_ethdev.c
+++ b/drivers/net/virtio/virtio_ethdev.c
@@ -310,19 +310,21 @@ virtio_init_vring(struct virtqueue *vq)
 
 	PMD_INIT_FUNC_TRACE();
 
-	/*
-	 * Reinitialise since virtio port might have been stopped and restarted
-	 */
 	memset(ring_mem, 0, vq->vq_ring_size);
-	vring_init(vr, size, ring_mem, VIRTIO_PCI_VRING_ALIGN);
-	vq->vq_used_cons_idx = 0;
-	vq->vq_desc_head_idx = 0;
-	vq->vq_avail_idx = 0;
-	vq->vq_desc_tail_idx = (uint16_t)(vq->vq_nentries - 1);
+	vring_init(vq->hw, vr, size, ring_mem, VIRTIO_PCI_VRING_ALIGN);
+
 	vq->vq_free_cnt = vq->vq_nentries;
 	memset(vq->vq_descx, 0, sizeof(struct vq_desc_extra) * vq->vq_nentries);
+	vq->vq_used_cons_idx = 0;
+	vq->vq_avail_idx     = 0;
+	if (vtpci_packed_queue(vq->hw)) {
+		vring_desc_init_1_1(vr, size);
+	} else {
+		vq->vq_desc_head_idx = 0;
+		vq->vq_desc_tail_idx = (uint16_t)(vq->vq_nentries - 1);
 
-	vring_desc_init(vr->desc, size);
+		vring_desc_init(vr->desc, size);
+	}
 
 	/*
 	 * Disable device(host) interrupting guest
@@ -397,7 +399,7 @@ virtio_init_queue(struct rte_eth_dev *dev, uint16_t vtpci_queue_idx)
 	/*
 	 * Reserve a memzone for vring elements
 	 */
-	size = vring_size(vq_size, VIRTIO_PCI_VRING_ALIGN);
+	size = vring_size(hw, vq_size, VIRTIO_PCI_VRING_ALIGN);
 	vq->vq_ring_size = RTE_ALIGN_CEIL(size, VIRTIO_PCI_VRING_ALIGN);
 	PMD_INIT_LOG(DEBUG, "vring_size: %d, rounded_vring_size: %d",
 		     size, vq->vq_ring_size);
diff --git a/drivers/net/virtio/virtio_pci.h b/drivers/net/virtio/virtio_pci.h
index a28ba8339..307a116d2 100644
--- a/drivers/net/virtio/virtio_pci.h
+++ b/drivers/net/virtio/virtio_pci.h
@@ -112,6 +112,8 @@ struct virtnet_ctl;
 
 #define VIRTIO_F_VERSION_1		32
 #define VIRTIO_F_IOMMU_PLATFORM	33
+#define VIRTIO_F_PACKED			34
+#define VIRTIO_F_IN_ORDER		35
 
 /*
  * Some VirtIO feature bits (currently bits 28 through 31) are
@@ -304,6 +306,12 @@ vtpci_with_feature(struct virtio_hw *hw, uint64_t bit)
 	return (hw->guest_features & (1ULL << bit)) != 0;
 }
 
+static inline int
+vtpci_packed_queue(struct virtio_hw *hw)
+{
+	return vtpci_with_feature(hw, VIRTIO_F_PACKED);
+}
+
 /*
  * Function declaration from virtio_pci.c
  */
diff --git a/drivers/net/virtio/virtio_ring.h b/drivers/net/virtio/virtio_ring.h
index 9e3c2a015..1cafef41b 100644
--- a/drivers/net/virtio/virtio_ring.h
+++ b/drivers/net/virtio/virtio_ring.h
@@ -9,6 +9,7 @@
 
 #include <rte_common.h>
 
+
 /* This marks a buffer as continuing via the next field. */
 #define VRING_DESC_F_NEXT       1
 /* This marks a buffer as write-only (otherwise read-only). */
@@ -54,11 +55,23 @@ struct vring_used {
 	struct vring_used_elem ring[0];
 };
 
+/* For support of packed virtqueues in Virtio 1.1 the format of descriptors
+ * looks like this.
+ */
+struct vring_desc_1_1 {
+	uint64_t addr;
+	uint32_t len;
+	uint16_t index;
+	uint16_t flags;
+};
+
+
 struct vring {
 	unsigned int num;
 	struct vring_desc  *desc;
 	struct vring_avail *avail;
 	struct vring_used  *used;
+	struct vring_desc_1_1 *desc_1_1;
 };
 
 /* The standard layout for the ring is a continuous chunk of memory which
@@ -95,10 +108,13 @@ struct vring {
 #define vring_avail_event(vr) (*(uint16_t *)&(vr)->used->ring[(vr)->num])
 
 static inline size_t
-vring_size(unsigned int num, unsigned long align)
+vring_size(struct virtio_hw *hw, unsigned int num, unsigned long align)
 {
 	size_t size;
 
+	if (vtpci_packed_queue(hw))
+		return num * sizeof(struct vring_desc_1_1);
+
 	size = num * sizeof(struct vring_desc);
 	size += sizeof(struct vring_avail) + (num * sizeof(uint16_t));
 	size = RTE_ALIGN_CEIL(size, align);
@@ -108,10 +124,15 @@ vring_size(unsigned int num, unsigned long align)
 }
 
 static inline void
-vring_init(struct vring *vr, unsigned int num, uint8_t *p,
+vring_init(struct virtio_hw *hw, struct vring *vr, unsigned int num, uint8_t *p,
 	unsigned long align)
 {
 	vr->num = num;
+	if (vtpci_packed_queue(hw)) {
+		vr->desc_1_1 = (struct vring_desc_1_1 *)p;
+		return;
+	}
+
 	vr->desc = (struct vring_desc *) p;
 	vr->avail = (struct vring_avail *) (p +
 		num * sizeof(struct vring_desc));
diff --git a/drivers/net/virtio/virtqueue.h b/drivers/net/virtio/virtqueue.h
index 2d9599218..0cc9268a4 100644
--- a/drivers/net/virtio/virtqueue.h
+++ b/drivers/net/virtio/virtqueue.h
@@ -245,6 +245,16 @@ struct virtio_tx_region {
 			   __attribute__((__aligned__(16)));
 };
 
+static inline void
+vring_desc_init_1_1(struct vring *vr, int n)
+{
+	int i;
+	for (i = 0; i < n; i++) {
+		struct vring_desc_1_1 *desc = &vr->desc_1_1[i];
+		desc->index = i;
+	}
+}
+
 /* Chain all the descriptors in the ring with an END */
 static inline void
 vring_desc_init(struct vring_desc *dp, uint16_t n)
-- 
2.14.3


* [dpdk-dev] [PATCH 02/14] net/virtio: don't call virtio_disable_intr() for packed queues
  2018-01-29 14:11 [dpdk-dev] [PATCH 00/14] implement packed virtqueues Jens Freimann
  2018-01-29 14:11 ` [dpdk-dev] [PATCH 01/14] net/virtio: vring init for packed queues Jens Freimann
@ 2018-01-29 14:11 ` Jens Freimann
  2018-01-29 14:11 ` [dpdk-dev] [PATCH 03/14] net/virtio: add virtio 1.1 defines Jens Freimann
                   ` (12 subsequent siblings)
  14 siblings, 0 replies; 27+ messages in thread
From: Jens Freimann @ 2018-01-29 14:11 UTC (permalink / raw)
  To: dev; +Cc: tiwei.bie, yliu, maxime.coquelin, mst

When VIRTIO_F_PACKED is set, don't call virtio_disable_intr().
This function accesses data structures which are not
available when packed virtqueues are enabled.

Signed-off-by: Jens Freimann <jfreimann@redhat.com>
---
 drivers/net/virtio/virtio_ethdev.c | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/drivers/net/virtio/virtio_ethdev.c b/drivers/net/virtio/virtio_ethdev.c
index 825b6c303..76879d87f 100644
--- a/drivers/net/virtio/virtio_ethdev.c
+++ b/drivers/net/virtio/virtio_ethdev.c
@@ -324,12 +324,11 @@ virtio_init_vring(struct virtqueue *vq)
 		vq->vq_desc_tail_idx = (uint16_t)(vq->vq_nentries - 1);
 
 		vring_desc_init(vr->desc, size);
+		/*
+		 * Disable device(host) interrupting guest
+		 */
+		virtqueue_disable_intr(vq);
 	}
-
-	/*
-	 * Disable device(host) interrupting guest
-	 */
-	virtqueue_disable_intr(vq);
 }
 
 static int
@@ -747,7 +746,8 @@ virtio_dev_rx_queue_intr_disable(struct rte_eth_dev *dev, uint16_t queue_id)
 	struct virtnet_rx *rxvq = dev->data->rx_queues[queue_id];
 	struct virtqueue *vq = rxvq->vq;
 
-	virtqueue_disable_intr(vq);
+	if (!vtpci_packed_queue(vq->hw))
+		virtqueue_disable_intr(vq);
 	return 0;
 }
 
-- 
2.14.3


* [dpdk-dev] [PATCH 03/14] net/virtio: add virtio 1.1 defines
  2018-01-29 14:11 [dpdk-dev] [PATCH 00/14] implement packed virtqueues Jens Freimann
  2018-01-29 14:11 ` [dpdk-dev] [PATCH 01/14] net/virtio: vring init for packed queues Jens Freimann
  2018-01-29 14:11 ` [dpdk-dev] [PATCH 02/14] net/virtio: don't call virtio_disable_intr() " Jens Freimann
@ 2018-01-29 14:11 ` Jens Freimann
  2018-01-29 14:11 ` [dpdk-dev] [PATCH 04/14] net/virtio: add packed virtqueue helpers Jens Freimann
                   ` (11 subsequent siblings)
  14 siblings, 0 replies; 27+ messages in thread
From: Jens Freimann @ 2018-01-29 14:11 UTC (permalink / raw)
  To: dev; +Cc: tiwei.bie, yliu, maxime.coquelin, mst

Signed-off-by: Jens Freimann <jfreimann@redhat.com>
---
 drivers/net/virtio/virtio_ring.h | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/drivers/net/virtio/virtio_ring.h b/drivers/net/virtio/virtio_ring.h
index 1cafef41b..839359444 100644
--- a/drivers/net/virtio/virtio_ring.h
+++ b/drivers/net/virtio/virtio_ring.h
@@ -17,6 +17,12 @@
 /* This means the buffer contains a list of buffer descriptors. */
 #define VRING_DESC_F_INDIRECT   4
 
+#define VIRTQ_DESC_F_AVAIL     7
+#define VIRTQ_DESC_F_USED      15
+#define DESC_USED (1ULL << VIRTQ_DESC_F_USED)
+#define DESC_AVAIL (1ULL << VIRTQ_DESC_F_AVAIL)
+
+
 /* The Host uses this in used->flags to advise the Guest: don't kick me
  * when you add a buffer.  It's unreliable, so it's simply an
  * optimization.  Guest will still kick if it's out of buffers. */
-- 
2.14.3


* [dpdk-dev] [PATCH 04/14] net/virtio: add packed virtqueue helpers
  2018-01-29 14:11 [dpdk-dev] [PATCH 00/14] implement packed virtqueues Jens Freimann
                   ` (2 preceding siblings ...)
  2018-01-29 14:11 ` [dpdk-dev] [PATCH 03/14] net/virtio: add virtio 1.1 defines Jens Freimann
@ 2018-01-29 14:11 ` Jens Freimann
  2018-01-29 14:11 ` [dpdk-dev] [PATCH 05/14] net/virtio: don't dump split virtqueue data Jens Freimann
                   ` (10 subsequent siblings)
  14 siblings, 0 replies; 27+ messages in thread
From: Jens Freimann @ 2018-01-29 14:11 UTC (permalink / raw)
  To: dev; +Cc: tiwei.bie, yliu, maxime.coquelin, mst

Add helper functions to set/clear and check descriptor flags.

Signed-off-by: Jens Freimann <jfreimann@redhat.com>
---
 drivers/net/virtio/virtio_ring.h | 32 ++++++++++++++++++++++++++++++++
 drivers/net/virtio/virtqueue.c   |  3 +++
 2 files changed, 35 insertions(+)

diff --git a/drivers/net/virtio/virtio_ring.h b/drivers/net/virtio/virtio_ring.h
index 839359444..47df96aac 100644
--- a/drivers/net/virtio/virtio_ring.h
+++ b/drivers/net/virtio/virtio_ring.h
@@ -74,12 +74,44 @@ struct vring_desc_1_1 {
 
 struct vring {
 	unsigned int num;
+	unsigned int avail_wrap_counter;
 	struct vring_desc  *desc;
 	struct vring_avail *avail;
 	struct vring_used  *used;
 	struct vring_desc_1_1 *desc_1_1;
 };
 
+static inline void toggle_wrap_counter(struct vring *vr)
+{
+	vr->avail_wrap_counter ^= 1;
+}
+
+static inline void _set_desc_avail(struct vring_desc_1_1 *desc,
+				   int wrap_counter)
+{
+	uint16_t flags = desc->flags;
+
+	if (wrap_counter) {
+		flags |= DESC_AVAIL;
+		flags &= ~DESC_USED;
+	} else {
+		flags &= ~DESC_AVAIL;
+		flags |= DESC_USED;
+	}
+
+	desc->flags = flags;
+}
+
+static inline void set_desc_avail(struct vring *vr, struct vring_desc_1_1 *desc)
+{
+	_set_desc_avail(desc, vr->avail_wrap_counter);
+}
+
+static inline int desc_is_used(struct vring_desc_1_1 *desc)
+{
+	return !(desc->flags & DESC_AVAIL) == !(desc->flags & DESC_USED);
+}
+
 /* The standard layout for the ring is a continuous chunk of memory which
  * looks like this.  We assume num is a power of 2.
  *
diff --git a/drivers/net/virtio/virtqueue.c b/drivers/net/virtio/virtqueue.c
index 1ada4fe08..052444756 100644
--- a/drivers/net/virtio/virtqueue.c
+++ b/drivers/net/virtio/virtqueue.c
@@ -43,6 +43,9 @@ virtqueue_rxvq_flush(struct virtqueue *vq)
 	uint16_t used_idx, desc_idx;
 	uint16_t nb_used, i;
 
+	if (vtpci_packed_queue(vq->hw))
+		return;
+
 	nb_used = VIRTQUEUE_NUSED(vq);
 
 	for (i = 0; i < nb_used; i++) {
-- 
2.14.3


* [dpdk-dev] [PATCH 05/14] net/virtio: don't dump split virtqueue data
  2018-01-29 14:11 [dpdk-dev] [PATCH 00/14] implement packed virtqueues Jens Freimann
                   ` (3 preceding siblings ...)
  2018-01-29 14:11 ` [dpdk-dev] [PATCH 04/14] net/virtio: add packed virtqueue helpers Jens Freimann
@ 2018-01-29 14:11 ` Jens Freimann
  2018-01-29 14:11 ` [dpdk-dev] [PATCH 06/14] net/virtio-user: add option to use packed queues Jens Freimann
                   ` (9 subsequent siblings)
  14 siblings, 0 replies; 27+ messages in thread
From: Jens Freimann @ 2018-01-29 14:11 UTC (permalink / raw)
  To: dev; +Cc: tiwei.bie, yliu, maxime.coquelin, mst

VIRTQUEUE_DUMP accesses split virtqueue data, which is not correct
when packed virtqueues are used.

Signed-off-by: Jens Freimann <jfreimann@redhat.com>
---
 drivers/net/virtio/virtqueue.h | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/drivers/net/virtio/virtqueue.h b/drivers/net/virtio/virtqueue.h
index 0cc9268a4..63ced5b6f 100644
--- a/drivers/net/virtio/virtqueue.h
+++ b/drivers/net/virtio/virtqueue.h
@@ -348,7 +348,9 @@ virtqueue_notify(struct virtqueue *vq)
 }
 
 #ifdef RTE_LIBRTE_VIRTIO_DEBUG_DUMP
-#define VIRTQUEUE_DUMP(vq) do { \
+#define VIRTQUEUE_DUMP(vq) \
+	do { \
+	if (vtpci_packed_queue((vq)->hw)) break; \
 	uint16_t used_idx, nused; \
 	used_idx = (vq)->vq_ring.used->idx; \
 	nused = (uint16_t)(used_idx - (vq)->vq_used_cons_idx); \
-- 
2.14.3


* [dpdk-dev] [PATCH 06/14] net/virtio-user: add option to use packed queues
  2018-01-29 14:11 [dpdk-dev] [PATCH 00/14] implement packed virtqueues Jens Freimann
                   ` (4 preceding siblings ...)
  2018-01-29 14:11 ` [dpdk-dev] [PATCH 05/14] net/virtio: don't dump split virtqueue data Jens Freimann
@ 2018-01-29 14:11 ` Jens Freimann
  2018-01-29 14:11 ` [dpdk-dev] [PATCH 07/14] net/virtio: implement transmit path for " Jens Freimann
                   ` (8 subsequent siblings)
  14 siblings, 0 replies; 27+ messages in thread
From: Jens Freimann @ 2018-01-29 14:11 UTC (permalink / raw)
  To: dev; +Cc: tiwei.bie, yliu, maxime.coquelin, mst

From: Yuanhan Liu <yuanhan.liu@linux.intel.com>

Add option to enable packed queue support for virtio-user
devices.
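
For illustration, assuming a vhost-user backend is already listening on
/tmp/vhost-user.sock (the socket path, cores and memory channels below
are placeholders), the new devarg can be passed to testpmd like this:

  testpmd -l 0-1 -n 4 --no-pci \
      --vdev 'net_virtio_user0,path=/tmp/vhost-user.sock,queues=1,version_1_1=1' \
      -- -i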

Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
---
 drivers/net/virtio/virtio_user/virtio_user_dev.c | 12 ++++++++++--
 drivers/net/virtio/virtio_user/virtio_user_dev.h |  3 ++-
 drivers/net/virtio/virtio_user_ethdev.c          | 15 ++++++++++++++-
 3 files changed, 26 insertions(+), 4 deletions(-)

diff --git a/drivers/net/virtio/virtio_user/virtio_user_dev.c b/drivers/net/virtio/virtio_user/virtio_user_dev.c
index 7a70c1867..05e3f2c28 100644
--- a/drivers/net/virtio/virtio_user/virtio_user_dev.c
+++ b/drivers/net/virtio/virtio_user/virtio_user_dev.c
@@ -305,11 +305,13 @@ virtio_user_dev_setup(struct virtio_user_dev *dev)
 	 1ULL << VIRTIO_NET_F_GUEST_CSUM	|	\
 	 1ULL << VIRTIO_NET_F_GUEST_TSO4	|	\
 	 1ULL << VIRTIO_NET_F_GUEST_TSO6	|	\
-	 1ULL << VIRTIO_F_VERSION_1)
+	 1ULL << VIRTIO_F_VERSION_1		|	\
+	 1ULL << VIRTIO_F_PACKED)
 
 int
 virtio_user_dev_init(struct virtio_user_dev *dev, char *path, int queues,
-		     int cq, int queue_size, const char *mac, char **ifname)
+		     int cq, int queue_size, const char *mac, char **ifname,
+		     int version_1_1)
 {
 	snprintf(dev->path, PATH_MAX, "%s", path);
 	dev->max_queue_pairs = queues;
@@ -337,6 +339,12 @@ virtio_user_dev_init(struct virtio_user_dev *dev, char *path, int queues,
 		PMD_INIT_LOG(ERR, "get_features failed: %s", strerror(errno));
 		return -1;
 	}
+
+	if (version_1_1)
+		dev->device_features |= (1ull << VIRTIO_F_PACKED);
+	else
+		dev->device_features &= ~(1ull << VIRTIO_F_PACKED);
+
 	if (dev->mac_specified)
 		dev->device_features |= (1ull << VIRTIO_NET_F_MAC);
 
diff --git a/drivers/net/virtio/virtio_user/virtio_user_dev.h b/drivers/net/virtio/virtio_user/virtio_user_dev.h
index 64467b4f9..ce1e1034c 100644
--- a/drivers/net/virtio/virtio_user/virtio_user_dev.h
+++ b/drivers/net/virtio/virtio_user/virtio_user_dev.h
@@ -42,7 +42,8 @@ int is_vhost_user_by_type(const char *path);
 int virtio_user_start_device(struct virtio_user_dev *dev);
 int virtio_user_stop_device(struct virtio_user_dev *dev);
 int virtio_user_dev_init(struct virtio_user_dev *dev, char *path, int queues,
-			 int cq, int queue_size, const char *mac, char **ifname);
+			 int cq, int queue_size, const char *mac, char **ifname,
+			 int version_1_1);
 void virtio_user_dev_uninit(struct virtio_user_dev *dev);
 void virtio_user_handle_cq(struct virtio_user_dev *dev, uint16_t queue_idx);
 #endif
diff --git a/drivers/net/virtio/virtio_user_ethdev.c b/drivers/net/virtio/virtio_user_ethdev.c
index 452cdfc88..59d786434 100644
--- a/drivers/net/virtio/virtio_user_ethdev.c
+++ b/drivers/net/virtio/virtio_user_ethdev.c
@@ -274,6 +274,8 @@ static const char *valid_args[] = {
 	VIRTIO_USER_ARG_QUEUE_SIZE,
 #define VIRTIO_USER_ARG_INTERFACE_NAME "iface"
 	VIRTIO_USER_ARG_INTERFACE_NAME,
+#define VIRTIO_USER_ARG_VERSION_1_1     "version_1_1"
+	VIRTIO_USER_ARG_VERSION_1_1,
 	NULL
 };
 
@@ -378,6 +380,7 @@ virtio_user_pmd_probe(struct rte_vdev_device *dev)
 	char *ifname = NULL;
 	char *mac_addr = NULL;
 	int ret = -1;
+	uint64_t version_1_1 = 0;
 
 	kvlist = rte_kvargs_parse(rte_vdev_device_args(dev), valid_args);
 	if (!kvlist) {
@@ -452,6 +455,15 @@ virtio_user_pmd_probe(struct rte_vdev_device *dev)
 		cq = 1;
 	}
 
+	if (rte_kvargs_count(kvlist, VIRTIO_USER_ARG_VERSION_1_1) == 1) {
+		if (rte_kvargs_process(kvlist, VIRTIO_USER_ARG_VERSION_1_1,
+				       &get_integer_arg, &version_1_1) < 0) {
+			PMD_INIT_LOG(ERR, "error to parse %s",
+				     VIRTIO_USER_ARG_VERSION_1_1);
+			goto end;
+		}
+	}
+
 	if (queues > 1 && cq == 0) {
 		PMD_INIT_LOG(ERR, "multi-q requires ctrl-q");
 		goto end;
@@ -473,7 +485,8 @@ virtio_user_pmd_probe(struct rte_vdev_device *dev)
 
 		hw = eth_dev->data->dev_private;
 		if (virtio_user_dev_init(hw->virtio_user_dev, path, queues, cq,
-				 queue_size, mac_addr, &ifname) < 0) {
+				 queue_size, mac_addr, &ifname,
+				 version_1_1) < 0) {
 			PMD_INIT_LOG(ERR, "virtio_user_dev_init fails");
 			virtio_user_eth_dev_free(eth_dev);
 			goto end;
-- 
2.14.3


* [dpdk-dev] [PATCH 07/14] net/virtio: implement transmit path for packed queues
  2018-01-29 14:11 [dpdk-dev] [PATCH 00/14] implement packed virtqueues Jens Freimann
                   ` (5 preceding siblings ...)
  2018-01-29 14:11 ` [dpdk-dev] [PATCH 06/14] net/virtio-user: add option to use packed queues Jens Freimann
@ 2018-01-29 14:11 ` Jens Freimann
  2018-02-12 13:16   ` Jason Wang
  2018-01-29 14:11 ` [dpdk-dev] [PATCH 08/14] net/virtio: implement receive " Jens Freimann
                   ` (7 subsequent siblings)
  14 siblings, 1 reply; 27+ messages in thread
From: Jens Freimann @ 2018-01-29 14:11 UTC (permalink / raw)
  To: dev; +Cc: tiwei.bie, yliu, maxime.coquelin, mst

This implements the transmit path for devices with
support for Virtio 1.1.

Add the feature bit for Virtio 1.1 and enable code to
add buffers to vring and mark descriptors as available.

This is based on a patch by Yuanhan Liu.
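
The essential ordering rule in this path, as a stripped-down and
self-contained sketch (ring-index wrapping and multi-segment mbufs are
omitted; a compiler barrier stands in for rte_smp_wmb(), and the real
code uses the helpers from patch 4):

#include <stdint.h>

struct pkd_desc { uint64_t addr; uint32_t len; uint16_t index; uint16_t flags; };

#define PKD_F_NEXT 1
#define PKD_AVAIL  (1 << 7)
#define PKD_USED   (1 << 15)

#define wmb() __asm__ __volatile__("" ::: "memory") /* stand-in barrier */

/* Post a header descriptor plus one data descriptor as a chain starting
 * at 'head'. Everything except the head's AVAIL/USED bits is written
 * first, so the device can never observe a half-initialized chain.
 */
static void
post_tx_chain(struct pkd_desc *ring, uint16_t head,
	      uint64_t hdr_pa, uint32_t hdr_len,
	      uint64_t data_pa, uint32_t data_len, int wrap)
{
	ring[head].addr  = hdr_pa;
	ring[head].len   = hdr_len;
	ring[head].flags = PKD_F_NEXT;  /* availability not published yet */

	ring[head + 1].addr  = data_pa;
	ring[head + 1].len   = data_len;
	ring[head + 1].flags = 0;       /* last descriptor in the chain */

	wmb();
	/* Publish the whole chain with a single store to the head flags. */
	ring[head].flags = PKD_F_NEXT | (wrap ? PKD_AVAIL : PKD_USED);
}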

Signed-off-by: Jens Freimann <jfreimann@redhat.com>
---
 drivers/net/virtio/Makefile          |   1 +
 drivers/net/virtio/virtio_ethdev.c   |  11 ++-
 drivers/net/virtio/virtio_ethdev.h   |   3 +
 drivers/net/virtio/virtio_rxtx.c     |   7 +-
 drivers/net/virtio/virtio_rxtx_1.1.c | 161 +++++++++++++++++++++++++++++++++++
 5 files changed, 180 insertions(+), 3 deletions(-)
 create mode 100644 drivers/net/virtio/virtio_rxtx_1.1.c

diff --git a/drivers/net/virtio/Makefile b/drivers/net/virtio/Makefile
index b215ada97..482bb2c52 100644
--- a/drivers/net/virtio/Makefile
+++ b/drivers/net/virtio/Makefile
@@ -27,6 +27,7 @@ LIBABIVER := 1
 SRCS-$(CONFIG_RTE_LIBRTE_VIRTIO_PMD) += virtqueue.c
 SRCS-$(CONFIG_RTE_LIBRTE_VIRTIO_PMD) += virtio_pci.c
 SRCS-$(CONFIG_RTE_LIBRTE_VIRTIO_PMD) += virtio_rxtx.c
+SRCS-$(CONFIG_RTE_LIBRTE_VIRTIO_PMD) += virtio_rxtx_1.1.c
 SRCS-$(CONFIG_RTE_LIBRTE_VIRTIO_PMD) += virtio_ethdev.c
 SRCS-$(CONFIG_RTE_LIBRTE_VIRTIO_PMD) += virtio_rxtx_simple.c
 
diff --git a/drivers/net/virtio/virtio_ethdev.c b/drivers/net/virtio/virtio_ethdev.c
index 76879d87f..b30e0d4a9 100644
--- a/drivers/net/virtio/virtio_ethdev.c
+++ b/drivers/net/virtio/virtio_ethdev.c
@@ -394,6 +394,8 @@ virtio_init_queue(struct rte_eth_dev *dev, uint16_t vtpci_queue_idx)
 	vq->hw = hw;
 	vq->vq_queue_index = vtpci_queue_idx;
 	vq->vq_nentries = vq_size;
+	if (vtpci_packed_queue(hw))
+		vq->vq_ring.avail_wrap_counter = 1;
 
 	/*
 	 * Reserve a memzone for vring elements
@@ -614,7 +616,8 @@ virtio_dev_close(struct rte_eth_dev *dev)
 	}
 
 	vtpci_reset(hw);
-	virtio_dev_free_mbufs(dev);
+	if (!vtpci_packed_queue(hw))
+		virtio_dev_free_mbufs(dev);
 	virtio_free_queues(hw);
 }
 
@@ -1371,7 +1374,11 @@ set_rxtx_funcs(struct rte_eth_dev *eth_dev)
 		eth_dev->rx_pkt_burst = &virtio_recv_pkts;
 	}
 
-	if (hw->use_simple_tx) {
+	if (vtpci_packed_queue(hw)) {
+		PMD_INIT_LOG(INFO, "virtio: using virtio 1.1 Tx path on port %u",
+			eth_dev->data->port_id);
+		eth_dev->tx_pkt_burst = virtio_xmit_pkts_1_1;
+	} else if (hw->use_simple_tx) {
 		PMD_INIT_LOG(INFO, "virtio: using simple Tx path on port %u",
 			eth_dev->data->port_id);
 		eth_dev->tx_pkt_burst = virtio_xmit_pkts_simple;
diff --git a/drivers/net/virtio/virtio_ethdev.h b/drivers/net/virtio/virtio_ethdev.h
index 4539d2e44..0fee979cb 100644
--- a/drivers/net/virtio/virtio_ethdev.h
+++ b/drivers/net/virtio/virtio_ethdev.h
@@ -36,6 +36,7 @@
 	 1ULL << VIRTIO_NET_F_GUEST_ANNOUNCE |	\
 	 1u << VIRTIO_RING_F_INDIRECT_DESC |    \
 	 1ULL << VIRTIO_F_VERSION_1       |	\
+	 1ULL << VIRTIO_F_PACKED          |	\
 	 1ULL << VIRTIO_F_IOMMU_PLATFORM)
 
 #define VIRTIO_PMD_SUPPORTED_GUEST_FEATURES	\
@@ -77,6 +78,8 @@ uint16_t virtio_recv_mergeable_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 
 uint16_t virtio_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 		uint16_t nb_pkts);
+uint16_t virtio_xmit_pkts_1_1(void *tx_queue, struct rte_mbuf **tx_pkts,
+		uint16_t nb_pkts);
 
 uint16_t virtio_recv_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
 		uint16_t nb_pkts);
diff --git a/drivers/net/virtio/virtio_rxtx.c b/drivers/net/virtio/virtio_rxtx.c
index 854af399e..39489e3f6 100644
--- a/drivers/net/virtio/virtio_rxtx.c
+++ b/drivers/net/virtio/virtio_rxtx.c
@@ -541,6 +541,10 @@ virtio_dev_tx_queue_setup_finish(struct rte_eth_dev *dev,
 
 	PMD_INIT_FUNC_TRACE();
 
+	if (vtpci_packed_queue(hw)) {
+		vq->vq_ring.avail_wrap_counter = 1;
+	}
+
 	if (hw->use_simple_tx) {
 		for (desc_idx = 0; desc_idx < mid_idx; desc_idx++) {
 			vq->vq_ring.avail->ring[desc_idx] =
@@ -561,7 +565,8 @@ virtio_dev_tx_queue_setup_finish(struct rte_eth_dev *dev,
 			vq->vq_ring.avail->ring[desc_idx] = desc_idx;
 	}
 
-	VIRTQUEUE_DUMP(vq);
+	if (!vtpci_packed_queue(hw))
+		VIRTQUEUE_DUMP(vq);
 
 	return 0;
 }
diff --git a/drivers/net/virtio/virtio_rxtx_1.1.c b/drivers/net/virtio/virtio_rxtx_1.1.c
new file mode 100644
index 000000000..97a502212
--- /dev/null
+++ b/drivers/net/virtio/virtio_rxtx_1.1.c
@@ -0,0 +1,161 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2010-2016 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <stdint.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <errno.h>
+
+#include <rte_cycles.h>
+#include <rte_memory.h>
+#include <rte_memzone.h>
+#include <rte_branch_prediction.h>
+#include <rte_mempool.h>
+#include <rte_malloc.h>
+#include <rte_mbuf.h>
+#include <rte_ether.h>
+#include <rte_ethdev.h>
+#include <rte_prefetch.h>
+#include <rte_string_fns.h>
+#include <rte_errno.h>
+#include <rte_byteorder.h>
+#include <rte_cpuflags.h>
+#include <rte_net.h>
+#include <rte_ip.h>
+#include <rte_udp.h>
+#include <rte_tcp.h>
+
+#include "virtio_logs.h"
+#include "virtio_ethdev.h"
+#include "virtio_pci.h"
+#include "virtqueue.h"
+#include "virtio_rxtx.h"
+#include "virtio_ring.h"
+
+/* Cleanup from completed transmits. */
+static void
+virtio_xmit_cleanup(struct virtqueue *vq)
+{
+	uint16_t idx;
+	uint16_t size = vq->vq_nentries;
+	struct vring_desc_1_1 *desc = vq->vq_ring.desc_1_1;
+
+	idx = vq->vq_used_cons_idx & (size - 1);
+	while (desc_is_used(&desc[idx]) &&
+	       vq->vq_free_cnt < size) {
+		while (desc[idx].flags & VRING_DESC_F_NEXT) {
+			vq->vq_free_cnt++;
+			idx = ++vq->vq_used_cons_idx & (size - 1);
+		}
+		vq->vq_free_cnt++;
+		idx = ++vq->vq_used_cons_idx & (size - 1);
+	}
+}
+
+uint16_t
+virtio_xmit_pkts_1_1(void *tx_queue, struct rte_mbuf **tx_pkts,
+		     uint16_t nb_pkts)
+{
+	struct virtnet_tx *txvq = tx_queue;
+	struct virtqueue *vq = txvq->vq;
+	uint16_t i;
+	struct vring_desc_1_1 *desc = vq->vq_ring.desc_1_1;
+	uint16_t idx;
+	struct vq_desc_extra *dxp;
+
+	if (unlikely(nb_pkts < 1))
+		return nb_pkts;
+
+	PMD_TX_LOG(DEBUG, "%d packets to xmit", nb_pkts);
+
+	if (likely(vq->vq_free_cnt < vq->vq_free_thresh))
+		virtio_xmit_cleanup(vq);
+
+	for (i = 0; i < nb_pkts; i++) {
+		struct rte_mbuf *txm = tx_pkts[i];
+		struct virtio_tx_region *txr = txvq->virtio_net_hdr_mz->addr;
+		uint16_t head_idx;
+		int wrap_counter;
+
+		if (unlikely(txm->nb_segs + 1 > vq->vq_free_cnt)) {
+			virtio_xmit_cleanup(vq);
+
+			if (unlikely(txm->nb_segs + 1 > vq->vq_free_cnt)) {
+				PMD_TX_LOG(ERR,
+					   "No free tx descriptors to transmit");
+				break;
+			}
+		}
+
+		txvq->stats.bytes += txm->pkt_len;
+
+		vq->vq_free_cnt -= txm->nb_segs + 1;
+
+		idx = (vq->vq_avail_idx++) & (vq->vq_nentries - 1);
+		head_idx = idx;
+		wrap_counter = vq->vq_ring.avail_wrap_counter;
+
+		if ((vq->vq_avail_idx & (vq->vq_nentries - 1)) == 0)
+			toggle_wrap_counter(&vq->vq_ring);
+
+		dxp = &vq->vq_descx[idx];
+		if (dxp->cookie != NULL)
+			rte_pktmbuf_free(dxp->cookie);
+		dxp->cookie = txm;
+
+		desc[idx].addr  = txvq->virtio_net_hdr_mem +
+				  RTE_PTR_DIFF(&txr[idx].tx_hdr, txr);
+		desc[idx].len   = vq->hw->vtnet_hdr_size;
+		desc[idx].flags |= VRING_DESC_F_NEXT;
+
+		do {
+			idx = (vq->vq_avail_idx++) & (vq->vq_nentries - 1);
+			desc[idx].addr  = VIRTIO_MBUF_DATA_DMA_ADDR(txm, vq);
+			desc[idx].len   = txm->data_len;
+			desc[idx].flags |= VRING_DESC_F_NEXT;
+			if (idx == (vq->vq_nentries - 1))
+				toggle_wrap_counter(&vq->vq_ring);
+		} while ((txm = txm->next) != NULL);
+
+		desc[idx].flags &= ~VRING_DESC_F_NEXT;
+
+		rte_smp_wmb();
+		_set_desc_avail(&desc[head_idx], wrap_counter);
+	}
+
+	txvq->stats.packets += i;
+	txvq->stats.errors  += nb_pkts - i;
+
+	return i;
+}
-- 
2.14.3


* [dpdk-dev] [PATCH 08/14] net/virtio: implement receive path for packed queues
  2018-01-29 14:11 [dpdk-dev] [PATCH 00/14] implement packed virtqueues Jens Freimann
                   ` (6 preceding siblings ...)
  2018-01-29 14:11 ` [dpdk-dev] [PATCH 07/14] net/virtio: implement transmit path for " Jens Freimann
@ 2018-01-29 14:11 ` Jens Freimann
  2018-01-29 14:11 ` [dpdk-dev] [PATCH 09/14] vhost: add virtio 1.1 defines Jens Freimann
                   ` (6 subsequent siblings)
  14 siblings, 0 replies; 27+ messages in thread
From: Jens Freimann @ 2018-01-29 14:11 UTC (permalink / raw)
  To: dev; +Cc: tiwei.bie, yliu, maxime.coquelin, mst

From: Yuanhan Liu <yuanhan.liu@linux.intel.com>

Implement the receive part here. No support for mergeable buffers yet.

Signed-off-by: Jens Freimann <jfreimann@redhat.com>
Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
---
 drivers/net/virtio/virtio_ethdev.c |   5 +-
 drivers/net/virtio/virtio_ethdev.h |   2 +
 drivers/net/virtio/virtio_rxtx.c   | 134 +++++++++++++++++++++++++++++++++++++
 3 files changed, 140 insertions(+), 1 deletion(-)

diff --git a/drivers/net/virtio/virtio_ethdev.c b/drivers/net/virtio/virtio_ethdev.c
index b30e0d4a9..7a6dd1586 100644
--- a/drivers/net/virtio/virtio_ethdev.c
+++ b/drivers/net/virtio/virtio_ethdev.c
@@ -1363,6 +1363,8 @@ set_rxtx_funcs(struct rte_eth_dev *eth_dev)
 		PMD_INIT_LOG(INFO, "virtio: using simple Rx path on port %u",
 			eth_dev->data->port_id);
 		eth_dev->rx_pkt_burst = virtio_recv_pkts_vec;
+	} else if (vtpci_packed_queue(hw)) {
+		eth_dev->rx_pkt_burst = &virtio_recv_pkts_1_1;
 	} else if (vtpci_with_feature(hw, VIRTIO_NET_F_MRG_RXBUF)) {
 		PMD_INIT_LOG(INFO,
 			"virtio: using mergeable buffer Rx path on port %u",
@@ -1513,7 +1515,8 @@ virtio_init_device(struct rte_eth_dev *eth_dev, uint64_t req_features)
 
 	/* Setting up rx_header size for the device */
 	if (vtpci_with_feature(hw, VIRTIO_NET_F_MRG_RXBUF) ||
-	    vtpci_with_feature(hw, VIRTIO_F_VERSION_1))
+	    vtpci_with_feature(hw, VIRTIO_F_VERSION_1) ||
+	    vtpci_with_feature(hw, VIRTIO_F_PACKED))
 		hw->vtnet_hdr_size = sizeof(struct virtio_net_hdr_mrg_rxbuf);
 	else
 		hw->vtnet_hdr_size = sizeof(struct virtio_net_hdr);
diff --git a/drivers/net/virtio/virtio_ethdev.h b/drivers/net/virtio/virtio_ethdev.h
index 0fee979cb..c7d5228a2 100644
--- a/drivers/net/virtio/virtio_ethdev.h
+++ b/drivers/net/virtio/virtio_ethdev.h
@@ -72,6 +72,8 @@ int virtio_dev_tx_queue_setup_finish(struct rte_eth_dev *dev,
 
 uint16_t virtio_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 		uint16_t nb_pkts);
+uint16_t virtio_recv_pkts_1_1(void *rx_queue, struct rte_mbuf **rx_pkts,
+		uint16_t nb_pkts);
 
 uint16_t virtio_recv_mergeable_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 		uint16_t nb_pkts);
diff --git a/drivers/net/virtio/virtio_rxtx.c b/drivers/net/virtio/virtio_rxtx.c
index 39489e3f6..70d393412 100644
--- a/drivers/net/virtio/virtio_rxtx.c
+++ b/drivers/net/virtio/virtio_rxtx.c
@@ -30,6 +30,7 @@
 #include "virtio_pci.h"
 #include "virtqueue.h"
 #include "virtio_rxtx.h"
+#include "virtio_ring.h"
 
 #ifdef RTE_LIBRTE_VIRTIO_DEBUG_DUMP
 #define VIRTIO_DUMP_PACKET(m, len) rte_pktmbuf_dump(stdout, m, len)
@@ -427,6 +428,34 @@ virtio_dev_rx_queue_setup_finish(struct rte_eth_dev *dev, uint16_t queue_idx)
 
 	PMD_INIT_FUNC_TRACE();
 
+	if (vtpci_packed_queue(hw)) {
+		struct vring_desc_1_1 *desc;
+		struct vq_desc_extra *dxp;
+
+		for (desc_idx = 0; desc_idx < vq->vq_nentries;
+				desc_idx++) {
+			m = rte_mbuf_raw_alloc(rxvq->mpool);
+			if (unlikely(m == NULL))
+				return -ENOMEM;
+
+			dxp = &vq->vq_descx[desc_idx];
+			dxp->cookie = m;
+			dxp->ndescs = 1;
+
+			desc = &vq->vq_ring.desc_1_1[desc_idx];
+			desc->addr = VIRTIO_MBUF_ADDR(m, vq) +
+				RTE_PKTMBUF_HEADROOM - hw->vtnet_hdr_size;
+			desc->len = m->buf_len - RTE_PKTMBUF_HEADROOM +
+				hw->vtnet_hdr_size;
+			desc->flags |= VRING_DESC_F_WRITE;
+			rte_smp_wmb();
+			set_desc_avail(&vq->vq_ring, desc);
+		}
+		toggle_wrap_counter(&vq->vq_ring);
+
+		return 0;
+	}
+
 	/* Allocate blank mbufs for the each rx descriptor */
 	nbufs = 0;
 
@@ -698,6 +727,111 @@ rx_offload_enabled(struct virtio_hw *hw)
 		vtpci_with_feature(hw, VIRTIO_NET_F_GUEST_TSO6);
 }
 
+uint16_t
+virtio_recv_pkts_1_1(void *rx_queue, struct rte_mbuf **rx_pkts,
+		     uint16_t nb_pkts)
+{
+	struct virtnet_rx *rxvq = rx_queue;
+	struct virtqueue *vq = rxvq->vq;
+	struct virtio_hw *hw = vq->hw;
+	struct rte_mbuf *rxm, *nmb;
+	uint16_t nb_rx;
+	uint32_t len;
+	uint32_t i;
+	uint32_t hdr_size;
+	int offload;
+	struct virtio_net_hdr *hdr;
+	struct vring_desc_1_1 *descs = vq->vq_ring.desc_1_1;
+	struct vring_desc_1_1 *desc;
+	uint16_t used_idx = vq->vq_used_cons_idx;
+	struct vq_desc_extra *dxp;
+
+	nb_rx = 0;
+	if (unlikely(hw->started == 0))
+		return nb_rx;
+
+	hdr_size = hw->vtnet_hdr_size;
+	offload = rx_offload_enabled(hw);
+
+	for (i = 0; i < nb_pkts; i++) {
+		desc = &descs[used_idx & (vq->vq_nentries - 1)];
+		if (!desc_is_used(desc))
+			break;
+
+		rte_smp_rmb();
+
+		nmb = rte_mbuf_raw_alloc(rxvq->mpool);
+		if (unlikely(nmb == NULL)) {
+			struct rte_eth_dev *dev
+				= &rte_eth_devices[rxvq->port_id];
+			dev->data->rx_mbuf_alloc_failed++;
+			break;
+		}
+
+		dxp = &vq->vq_descx[used_idx & (vq->vq_nentries - 1)];
+
+		len = desc->len;
+		rxm = dxp->cookie;
+		dxp->cookie = nmb;
+		dxp->ndescs = 1;
+
+		desc->addr = VIRTIO_MBUF_ADDR(nmb, vq) +
+			RTE_PKTMBUF_HEADROOM - hw->vtnet_hdr_size;
+		desc->len = nmb->buf_len - RTE_PKTMBUF_HEADROOM +
+			hw->vtnet_hdr_size;
+		desc->flags |= VRING_DESC_F_WRITE;
+
+		PMD_RX_LOG(DEBUG, "packet len:%d", len);
+
+		if (unlikely(len < hdr_size + ETHER_HDR_LEN)) {
+			PMD_RX_LOG(ERR, "Packet drop");
+			rte_pktmbuf_free(rxm);
+			rxvq->stats.errors++;
+			continue;
+		}
+
+		rxm->port = rxvq->port_id;
+		rxm->data_off = RTE_PKTMBUF_HEADROOM;
+		rxm->ol_flags = 0;
+		rxm->vlan_tci = 0;
+
+		rxm->pkt_len = (uint32_t)(len - hdr_size);
+		rxm->data_len = (uint16_t)(len - hdr_size);
+
+		hdr = (struct virtio_net_hdr *)((char *)rxm->buf_addr +
+			RTE_PKTMBUF_HEADROOM - hdr_size);
+
+		if (hw->vlan_strip)
+			rte_vlan_strip(rxm);
+
+		if (offload && virtio_rx_offload(rxm, hdr) < 0) {
+			rte_pktmbuf_free(rxm);
+			rxvq->stats.errors++;
+			continue;
+		}
+
+		VIRTIO_DUMP_PACKET(rxm, rxm->data_len);
+
+		rxvq->stats.bytes += rxm->pkt_len;
+		virtio_update_packet_stats(&rxvq->stats, rxm);
+
+		rte_smp_wmb();
+		set_desc_avail(&vq->vq_ring, desc);
+
+		rx_pkts[nb_rx++] = rxm;
+
+		used_idx++;
+		if ((used_idx & (vq->vq_nentries - 1)) == 0)
+			toggle_wrap_counter(&vq->vq_ring);
+	}
+
+	rxvq->stats.packets += nb_rx;
+
+	vq->vq_used_cons_idx = used_idx;
+
+	return nb_rx;
+}
+
 #define VIRTIO_MBUF_BURST_SZ 64
 #define DESC_PER_CACHELINE (RTE_CACHE_LINE_SIZE / sizeof(struct vring_desc))
 uint16_t
-- 
2.14.3


* [dpdk-dev] [PATCH 09/14] vhost: add virtio 1.1 defines
  2018-01-29 14:11 [dpdk-dev] [PATCH 00/14] implement packed virtqueues Jens Freimann
                   ` (7 preceding siblings ...)
  2018-01-29 14:11 ` [dpdk-dev] [PATCH 08/14] net/virtio: implement receive " Jens Freimann
@ 2018-01-29 14:11 ` Jens Freimann
  2018-01-29 14:24   ` Yuanhan Liu
  2018-01-29 14:11 ` [dpdk-dev] [PATCH 10/14] vhost: vring address setup for packed queues Jens Freimann
                   ` (5 subsequent siblings)
  14 siblings, 1 reply; 27+ messages in thread
From: Jens Freimann @ 2018-01-29 14:11 UTC (permalink / raw)
  To: dev; +Cc: tiwei.bie, yliu, maxime.coquelin, mst

This should actually be in the kernel header file, but it isn't
yet. For now let's use our own headers.

Signed-off-by: Jens Freimann <jfreimann@redhat.com>
---
 lib/librte_vhost/vhost.h      |  4 +++-
 lib/librte_vhost/virtio-1.1.h | 20 ++++++++++++++++++++
 2 files changed, 23 insertions(+), 1 deletion(-)
 create mode 100644 lib/librte_vhost/virtio-1.1.h

diff --git a/lib/librte_vhost/vhost.h b/lib/librte_vhost/vhost.h
index e52a9b69c..152e8bce1 100644
--- a/lib/librte_vhost/vhost.h
+++ b/lib/librte_vhost/vhost.h
@@ -167,7 +167,9 @@ struct vhost_msg {
 #ifndef VIRTIO_F_VERSION_1
  #define VIRTIO_F_VERSION_1 32
 #endif
-
+#ifndef VIRTIO_F_PACKED
+ #define VIRTIO_F_PACKED 34
+#endif
 #define VHOST_USER_F_PROTOCOL_FEATURES	30
 
 /* Features supported by this builtin vhost-user net driver. */
diff --git a/lib/librte_vhost/virtio-1.1.h b/lib/librte_vhost/virtio-1.1.h
new file mode 100644
index 000000000..5ca0bc33f
--- /dev/null
+++ b/lib/librte_vhost/virtio-1.1.h
@@ -0,0 +1,20 @@
+#ifndef __VIRTIO_1_1_H
+#define __VIRTIO_1_1_H
+
+#define VRING_DESC_F_NEXT       1
+#define VRING_DESC_F_WRITE      2
+#define VRING_DESC_F_INDIRECT   4
+
+#define VIRTQ_DESC_F_AVAIL      7
+#define VIRTQ_DESC_F_USED      15
+#define DESC_USED               (1ULL << VIRTQ_DESC_F_USED)
+#define DESC_AVAIL              (1ULL << VIRTQ_DESC_F_AVAIL)
+
+struct vring_desc_1_1 {
+	uint64_t addr;
+	uint32_t len;
+	uint16_t index;
+	uint16_t flags;
+};
+
+#endif /* __VIRTIO_1_1_H */
-- 
2.14.3


* [dpdk-dev] [PATCH 10/14] vhost: vring address setup for packed queues
  2018-01-29 14:11 [dpdk-dev] [PATCH 00/14] implement packed virtqueues Jens Freimann
                   ` (8 preceding siblings ...)
  2018-01-29 14:11 ` [dpdk-dev] [PATCH 09/14] vhost: add virtio 1.1 defines Jens Freimann
@ 2018-01-29 14:11 ` Jens Freimann
  2018-02-01  9:16   ` Maxime Coquelin
  2018-01-29 14:11 ` [dpdk-dev] [PATCH 11/14] vhost: add helpers for packed virtqueues Jens Freimann
                   ` (4 subsequent siblings)
  14 siblings, 1 reply; 27+ messages in thread
From: Jens Freimann @ 2018-01-29 14:11 UTC (permalink / raw)
  To: dev; +Cc: tiwei.bie, yliu, maxime.coquelin, mst

From: Yuanhan Liu <yuanhan.liu@linux.intel.com>

Add code to set up packed queues when enabled.

Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Signed-off-by: Jens Freimann <jfreiman@redhat.com>
---
 lib/librte_vhost/vhost.c      |  4 ++++
 lib/librte_vhost/vhost.h      |  1 +
 lib/librte_vhost/vhost_user.c | 17 ++++++++++++++++-
 3 files changed, 21 insertions(+), 1 deletion(-)

diff --git a/lib/librte_vhost/vhost.c b/lib/librte_vhost/vhost.c
index 1dd9adbc7..78913912c 100644
--- a/lib/librte_vhost/vhost.c
+++ b/lib/librte_vhost/vhost.c
@@ -536,6 +536,9 @@ rte_vhost_enable_guest_notification(int vid, uint16_t queue_id, int enable)
 {
 	struct virtio_net *dev = get_device(vid);
 
+	if (dev->features & (1ULL << VIRTIO_F_PACKED))
+		return 0;
+
 	if (dev == NULL)
 		return -1;
 
@@ -545,6 +548,7 @@ rte_vhost_enable_guest_notification(int vid, uint16_t queue_id, int enable)
 		return -1;
 	}
 
+
 	dev->virtqueue[queue_id]->used->flags = VRING_USED_F_NO_NOTIFY;
 	return 0;
 }
diff --git a/lib/librte_vhost/vhost.h b/lib/librte_vhost/vhost.h
index 152e8bce1..8554d51d8 100644
--- a/lib/librte_vhost/vhost.h
+++ b/lib/librte_vhost/vhost.h
@@ -68,6 +68,7 @@ struct batch_copy_elem {
  */
 struct vhost_virtqueue {
 	struct vring_desc	*desc;
+	struct vring_desc_1_1   *desc_1_1;
 	struct vring_avail	*avail;
 	struct vring_used	*used;
 	uint32_t		size;
diff --git a/lib/librte_vhost/vhost_user.c b/lib/librte_vhost/vhost_user.c
index 1dd1a61b6..979bffe1a 100644
--- a/lib/librte_vhost/vhost_user.c
+++ b/lib/librte_vhost/vhost_user.c
@@ -414,6 +414,19 @@ translate_ring_addresses(struct virtio_net *dev, int vq_index)
 	struct vhost_virtqueue *vq = dev->virtqueue[vq_index];
 	struct vhost_vring_addr *addr = &vq->ring_addrs;
 
+	if (dev->features & (1ULL << VIRTIO_F_PACKED)) {
+		vq->desc_1_1 = (struct vring_desc_1_1 *) ring_addr_to_vva
+			(dev, vq, addr->desc_user_addr, sizeof(vq->desc_1_1));
+		vq->desc = NULL;
+		vq->avail = NULL;
+		vq->used = NULL;
+		vq->log_guest_addr = 0;
+
+		assert(vq->last_used_idx == 0);
+
+		return dev;
+	}
+
 	/* The addresses are converted from QEMU virtual to Vhost virtual. */
 	if (vq->desc && vq->avail && vq->used)
 		return dev;
@@ -426,6 +439,7 @@ translate_ring_addresses(struct virtio_net *dev, int vq_index)
 			dev->vid);
 		return dev;
 	}
+	vq->desc_1_1 = NULL;
 
 	dev = numa_realloc(dev, vq_index);
 	vq = dev->virtqueue[vq_index];
@@ -763,7 +777,8 @@ vhost_user_set_mem_table(struct virtio_net *dev, struct VhostUserMsg *pmsg)
 static int
 vq_is_ready(struct vhost_virtqueue *vq)
 {
-	return vq && vq->desc && vq->avail && vq->used &&
+	return vq &&
+	       (vq->desc_1_1 || (vq->desc && vq->avail && vq->used)) &&
 	       vq->kickfd != VIRTIO_UNINITIALIZED_EVENTFD &&
 	       vq->callfd != VIRTIO_UNINITIALIZED_EVENTFD;
 }
-- 
2.14.3


* [dpdk-dev] [PATCH 11/14] vhost: add helpers for packed virtqueues
  2018-01-29 14:11 [dpdk-dev] [PATCH 00/14] implement packed virtqueues Jens Freimann
                   ` (9 preceding siblings ...)
  2018-01-29 14:11 ` [dpdk-dev] [PATCH 10/14] vhost: vring address setup for packed queues Jens Freimann
@ 2018-01-29 14:11 ` Jens Freimann
  2018-02-01  9:20   ` Maxime Coquelin
  2018-01-29 14:11 ` [dpdk-dev] [PATCH 12/14] vhost: dequeue for packed queues Jens Freimann
                   ` (3 subsequent siblings)
  14 siblings, 1 reply; 27+ messages in thread
From: Jens Freimann @ 2018-01-29 14:11 UTC (permalink / raw)
  To: dev; +Cc: tiwei.bie, yliu, maxime.coquelin, mst

Add some helper functions to set/check descriptor flags
and toggle the used wrap counter.

Signed-off-by: Jens Freimann <jfreimann@redhat.com>
---
 lib/librte_vhost/virtio-1.1.h | 43 +++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 43 insertions(+)

diff --git a/lib/librte_vhost/virtio-1.1.h b/lib/librte_vhost/virtio-1.1.h
index 5ca0bc33f..84039797e 100644
--- a/lib/librte_vhost/virtio-1.1.h
+++ b/lib/librte_vhost/virtio-1.1.h
@@ -17,4 +17,47 @@ struct vring_desc_1_1 {
 	uint16_t flags;
 };
 
+static inline void
+toggle_wrap_counter(struct vhost_virtqueue *vq)
+{
+	vq->used_wrap_counter ^= 1;
+}
+
+static inline int
+desc_is_avail(struct vhost_virtqueue *vq, struct vring_desc_1_1 *desc)
+{
+	if (!vq)
+		return -1;
+
+	if (vq->used_wrap_counter == 1)
+		if ((desc->flags & DESC_AVAIL) && !(desc->flags & DESC_USED))
+			return 1;
+	if (vq->used_wrap_counter == 0)
+		if (!(desc->flags & DESC_AVAIL) && (desc->flags & DESC_USED))
+			return 1;
+	return 0;
+}
+
+static inline void
+_set_desc_used(struct vring_desc_1_1 *desc, int wrap_counter)
+{
+	uint16_t flags = desc->flags;
+
+	if (wrap_counter == 1) {
+		flags |= DESC_USED;
+		flags |= DESC_AVAIL;
+	} else {
+		flags &= ~DESC_USED;
+		flags &= ~DESC_AVAIL;
+	}
+
+	desc->flags = flags;
+}
+
+static inline void
+set_desc_used(struct vhost_virtqueue *vq, struct vring_desc_1_1 *desc)
+{
+	_set_desc_used(desc, vq->used_wrap_counter);
+}
+
 #endif /* __VIRTIO_1_1_H */
-- 
2.14.3


* [dpdk-dev] [PATCH 12/14] vhost: dequeue for packed queues
  2018-01-29 14:11 [dpdk-dev] [PATCH 00/14] implement packed virtqueues Jens Freimann
                   ` (10 preceding siblings ...)
  2018-01-29 14:11 ` [dpdk-dev] [PATCH 11/14] vhost: add helpers for packed virtqueues Jens Freimann
@ 2018-01-29 14:11 ` Jens Freimann
  2018-02-01  9:35   ` Maxime Coquelin
  2018-01-29 14:11 ` [dpdk-dev] [PATCH 13/14] vhost: packed queue enqueue path Jens Freimann
                   ` (2 subsequent siblings)
  14 siblings, 1 reply; 27+ messages in thread
From: Jens Freimann @ 2018-01-29 14:11 UTC (permalink / raw)
  To: dev; +Cc: tiwei.bie, yliu, maxime.coquelin, mst

Implement code to dequeue and process descriptors from
the vring if VIRTIO_F_PACKED is enabled.

Check whether a descriptor was made available by the driver by looking
at the VIRTQ_DESC_F_AVAIL flag in the descriptor. If so, dequeue it and
set the VIRTQ_DESC_F_USED flag according to the current value of the
used wrap counter.

The used wrap counter needs to be toggled when the last descriptor in
the ring is written out. This allows the host/guest to detect new
descriptors even after the ring has wrapped.
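
The vhost side of that protocol, condensed into a self-contained sketch
(single-descriptor entries only, power-of-two ring size; compiler
barriers stand in for rte_smp_rmb()/rte_smp_wmb(), and the actual copy
into an mbuf is elided):

#include <stdint.h>

struct pkd_desc { uint64_t addr; uint32_t len; uint16_t index; uint16_t flags; };

#define PKD_AVAIL (1 << 7)
#define PKD_USED  (1 << 15)

#define rmb() __asm__ __volatile__("" ::: "memory")
#define wmb() __asm__ __volatile__("" ::: "memory")

static inline int
desc_is_avail(const struct pkd_desc *d, int used_wrap_counter)
{
	if (used_wrap_counter)
		return (d->flags & PKD_AVAIL) && !(d->flags & PKD_USED);
	return !(d->flags & PKD_AVAIL) && (d->flags & PKD_USED);
}

/* Dequeue up to 'count' descriptors; returns how many were consumed. */
static uint16_t
dequeue_burst(struct pkd_desc *ring, uint16_t size, uint16_t *last_used_idx,
	      int *used_wrap_counter, uint16_t count)
{
	uint16_t n;

	for (n = 0; n < count; n++) {
		struct pkd_desc *d = &ring[*last_used_idx & (size - 1)];

		if (!desc_is_avail(d, *used_wrap_counter))
			break;
		rmb();   /* read the flags before the descriptor payload */

		/* ... copy d->len bytes from guest address d->addr ... */

		wmb();   /* write barrier before giving the descriptor back */
		d->flags = *used_wrap_counter ? (PKD_AVAIL | PKD_USED) : 0;

		if ((++(*last_used_idx) & (size - 1)) == 0)
			*used_wrap_counter ^= 1;
	}

	return n;
}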

Signed-off-by: Jens Freimann <jfreimann@redhat.com>
---
 lib/librte_vhost/vhost.c      |   1 +
 lib/librte_vhost/vhost.h      |   1 +
 lib/librte_vhost/virtio_net.c | 194 ++++++++++++++++++++++++++++++++++++++++++
 3 files changed, 196 insertions(+)

diff --git a/lib/librte_vhost/vhost.c b/lib/librte_vhost/vhost.c
index 78913912c..e5f58d9c8 100644
--- a/lib/librte_vhost/vhost.c
+++ b/lib/librte_vhost/vhost.c
@@ -191,6 +191,7 @@ init_vring_queue(struct virtio_net *dev, uint32_t vring_idx)
 
 	vq->kickfd = VIRTIO_UNINITIALIZED_EVENTFD;
 	vq->callfd = VIRTIO_UNINITIALIZED_EVENTFD;
+	vq->used_wrap_counter = 1;
 
 	vhost_user_iotlb_init(dev, vring_idx);
 	/* Backends are set to -1 indicating an inactive device. */
diff --git a/lib/librte_vhost/vhost.h b/lib/librte_vhost/vhost.h
index 8554d51d8..a3d4214b6 100644
--- a/lib/librte_vhost/vhost.h
+++ b/lib/librte_vhost/vhost.h
@@ -106,6 +106,7 @@ struct vhost_virtqueue {
 
 	struct batch_copy_elem	*batch_copy_elems;
 	uint16_t		batch_copy_nb_elems;
+	uint32_t		used_wrap_counter;
 
 	rte_rwlock_t	iotlb_lock;
 	rte_rwlock_t	iotlb_pending_lock;
diff --git a/lib/librte_vhost/virtio_net.c b/lib/librte_vhost/virtio_net.c
index edfab3ba6..5d4cfe8cc 100644
--- a/lib/librte_vhost/virtio_net.c
+++ b/lib/librte_vhost/virtio_net.c
@@ -19,6 +19,7 @@
 
 #include "iotlb.h"
 #include "vhost.h"
+#include "virtio-1.1.h"
 
 #define MAX_PKT_BURST 32
 
@@ -1111,6 +1112,199 @@ restore_mbuf(struct rte_mbuf *m)
 	}
 }
 
+static inline uint16_t
+dequeue_desc(struct virtio_net *dev, struct vhost_virtqueue *vq,
+	     struct rte_mempool *mbuf_pool, struct rte_mbuf *m,
+	     struct vring_desc_1_1 *descs)
+{
+	struct vring_desc_1_1 *desc;
+	uint64_t desc_addr;
+	uint32_t desc_avail, desc_offset;
+	uint32_t mbuf_avail, mbuf_offset;
+	uint32_t cpy_len;
+	struct rte_mbuf *cur = m, *prev = m;
+	struct virtio_net_hdr *hdr = NULL;
+	uint16_t head_idx = vq->last_used_idx & (vq->size - 1);
+	int wrap_counter = vq->used_wrap_counter;
+
+	desc = &descs[vq->last_used_idx & (vq->size - 1)];
+	if (unlikely((desc->len < dev->vhost_hlen)) ||
+			(desc->flags & VRING_DESC_F_INDIRECT))
+		rte_panic("INDIRECT not supported yet");
+
+	desc_addr = rte_vhost_gpa_to_vva(dev->mem, desc->addr);
+	if (unlikely(!desc_addr))
+		return -1;
+
+	if (virtio_net_with_host_offload(dev)) {
+		hdr = (struct virtio_net_hdr *)((uintptr_t)desc_addr);
+		rte_prefetch0(hdr);
+	}
+
+	/*
+	 * A virtio driver normally uses at least 2 desc buffers
+	 * for Tx: the first for storing the header, and others
+	 * for storing the data.
+	 */
+	if (likely((desc->len == dev->vhost_hlen) &&
+		   (desc->flags & VRING_DESC_F_NEXT) != 0)) {
+		if ((++vq->last_used_idx & (vq->size - 1)) == 0)
+			toggle_wrap_counter(vq);
+
+		desc = &descs[vq->last_used_idx & (vq->size - 1)];
+
+		if (unlikely(desc->flags & VRING_DESC_F_INDIRECT))
+			rte_panic("INDIRECT not supported yet");
+
+		desc_addr = rte_vhost_gpa_to_vva(dev->mem, desc->addr);
+		if (unlikely(!desc_addr))
+			return -1;
+
+		desc_offset = 0;
+		desc_avail  = desc->len;
+	} else {
+		desc_avail  = desc->len - dev->vhost_hlen;
+		desc_offset = dev->vhost_hlen;
+	}
+
+	rte_prefetch0((void *)(uintptr_t)(desc_addr + desc_offset));
+
+	PRINT_PACKET(dev, (uintptr_t)(desc_addr + desc_offset), desc_avail, 0);
+
+	mbuf_offset = 0;
+	mbuf_avail  = m->buf_len - RTE_PKTMBUF_HEADROOM;
+	while (1) {
+		uint64_t hpa;
+
+		cpy_len = RTE_MIN(desc_avail, mbuf_avail);
+
+		/*
+		 * A desc buf might across two host physical pages that are
+		 * not continuous. In such case (gpa_to_hpa returns 0), data
+		 * will be copied even though zero copy is enabled.
+		 */
+		if (unlikely(dev->dequeue_zero_copy && (hpa = gpa_to_hpa(dev,
+					desc->addr + desc_offset, cpy_len)))) {
+			cur->data_len = cpy_len;
+			cur->data_off = 0;
+			cur->buf_addr = (void *)(uintptr_t)desc_addr;
+			cur->buf_physaddr = hpa;
+
+			/*
+			 * In zero copy mode, one mbuf can only reference data
+			 * for one or partial of one desc buff.
+			 */
+			mbuf_avail = cpy_len;
+		} else {
+			rte_memcpy(rte_pktmbuf_mtod_offset(cur, void *,
+							   mbuf_offset),
+				(void *)((uintptr_t)(desc_addr + desc_offset)),
+				cpy_len);
+		}
+
+		mbuf_avail  -= cpy_len;
+		mbuf_offset += cpy_len;
+		desc_avail  -= cpy_len;
+		desc_offset += cpy_len;
+
+		/* This desc reaches to its end, get the next one */
+		if (desc_avail == 0) {
+			if ((desc->flags & VRING_DESC_F_NEXT) == 0)
+				break;
+
+			if ((++vq->last_used_idx & (vq->size - 1)) == 0)
+				toggle_wrap_counter(vq);
+
+			desc = &descs[vq->last_used_idx & (vq->size - 1)];
+			if (unlikely(desc->flags & VRING_DESC_F_INDIRECT))
+				rte_panic("INDIRECT not supported yet");
+
+			desc_addr = rte_vhost_gpa_to_vva(dev->mem, desc->addr);
+			if (unlikely(!desc_addr))
+				return -1;
+
+			rte_prefetch0((void *)(uintptr_t)desc_addr);
+
+			desc_offset = 0;
+			desc_avail  = desc->len;
+
+			PRINT_PACKET(dev, (uintptr_t)desc_addr, desc->len, 0);
+		}
+
+		/*
+		 * This mbuf reaches to its end, get a new one
+		 * to hold more data.
+		 */
+		if (mbuf_avail == 0) {
+			cur = rte_pktmbuf_alloc(mbuf_pool);
+			if (unlikely(cur == NULL)) {
+				RTE_LOG(ERR, VHOST_DATA, "Failed to "
+					"allocate memory for mbuf.\n");
+				return -1;
+			}
+
+			prev->next = cur;
+			prev->data_len = mbuf_offset;
+			m->nb_segs += 1;
+			m->pkt_len += mbuf_offset;
+			prev = cur;
+
+			mbuf_offset = 0;
+			mbuf_avail  = cur->buf_len - RTE_PKTMBUF_HEADROOM;
+		}
+	}
+
+	if (hdr)
+		vhost_dequeue_offload(hdr, m);
+
+	if ((++vq->last_used_idx & (vq->size - 1)) == 0)
+		toggle_wrap_counter(vq);
+
+	rte_smp_wmb();
+	_set_desc_used(&descs[head_idx], wrap_counter);
+
+	prev->data_len = mbuf_offset;
+	m->pkt_len    += mbuf_offset;
+
+	return 0;
+}
+
+static inline uint16_t
+vhost_dequeue_burst_1_1(struct virtio_net *dev, struct vhost_virtqueue *vq,
+			struct rte_mempool *mbuf_pool, struct rte_mbuf **pkts,
+			uint16_t count)
+{
+	uint16_t i;
+	uint16_t idx;
+	struct vring_desc_1_1 *desc = vq->desc_1_1;
+	int err;
+
+	count = RTE_MIN(MAX_PKT_BURST, count);
+	for (i = 0; i < count; i++) {
+		idx = vq->last_used_idx & (vq->size - 1);
+		if (!desc_is_avail(vq, &desc[idx]))
+			break;
+		rte_smp_rmb();
+
+		pkts[i] = rte_pktmbuf_alloc(mbuf_pool);
+		if (unlikely(pkts[i] == NULL)) {
+			RTE_LOG(ERR, VHOST_DATA,
+				"Failed to allocate memory for mbuf.\n");
+			break;
+		}
+
+		err = dequeue_desc(dev, vq, mbuf_pool, pkts[i], desc);
+		if (unlikely(err)) {
+			rte_pktmbuf_free(pkts[i]);
+			break;
+		}
+	}
+
+	rte_spinlock_unlock(&vq->access_lock);
+
+	return i;
+}
+
 uint16_t
 rte_vhost_dequeue_burst(int vid, uint16_t queue_id,
 	struct rte_mempool *mbuf_pool, struct rte_mbuf **pkts, uint16_t count)
-- 
2.14.3


* [dpdk-dev] [PATCH 13/14] vhost: packed queue enqueue path
  2018-01-29 14:11 [dpdk-dev] [PATCH 00/14] implement packed virtqueues Jens Freimann
                   ` (11 preceding siblings ...)
  2018-01-29 14:11 ` [dpdk-dev] [PATCH 12/14] vhost: dequeue for packed queues Jens Freimann
@ 2018-01-29 14:11 ` Jens Freimann
  2018-01-29 14:11 ` [dpdk-dev] [PATCH 14/14] vhost: enable packed virtqueues Jens Freimann
  2018-02-06  6:20 ` [dpdk-dev] [PATCH 00/14] implement " Jason Wang
  14 siblings, 0 replies; 27+ messages in thread
From: Jens Freimann @ 2018-01-29 14:11 UTC (permalink / raw)
  To: dev; +Cc: tiwei.bie, yliu, maxime.coquelin, mst

Implement enqueue of packets to the receive virtqueue.

Set the VIRTQ_DESC_F_USED descriptor flag and toggle the used wrap
counter when the last descriptor in the ring is used. Perform a write
memory barrier before the flags are written to the descriptor.

Chained descriptors are not supported with this patch.

Signed-off-by: Jens Freimann <jfreimann@redhat.com>
---
 lib/librte_vhost/virtio_net.c | 120 ++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 120 insertions(+)

diff --git a/lib/librte_vhost/virtio_net.c b/lib/librte_vhost/virtio_net.c
index 5d4cfe8cc..c1b77fff5 100644
--- a/lib/librte_vhost/virtio_net.c
+++ b/lib/librte_vhost/virtio_net.c
@@ -695,6 +695,126 @@ virtio_dev_merge_rx(struct virtio_net *dev, uint16_t queue_id,
 	return pkt_idx;
 }
 
+static inline uint32_t __attribute__((always_inline))
+vhost_enqueue_burst_1_1(struct virtio_net *dev, uint16_t queue_id,
+	      struct rte_mbuf **pkts, uint32_t count)
+{
+	struct vhost_virtqueue *vq;
+	struct vring_desc_1_1 *descs;
+	uint16_t idx;
+	uint16_t mask;
+	uint16_t i;
+
+	vq = dev->virtqueue[queue_id];
+
+	rte_spinlock_lock(&vq->access_lock);
+
+	if (unlikely(vq->enabled == 0)) {
+		i = 0;
+		goto out_access_unlock;
+	}
+
+	descs = vq->desc_1_1;
+	mask = vq->size - 1;
+
+	for (i = 0; i < count; i++) {
+		uint32_t desc_avail, desc_offset;
+		uint32_t mbuf_avail, mbuf_offset;
+		uint32_t cpy_len;
+		struct vring_desc_1_1 *desc;
+		uint64_t desc_addr;
+		struct virtio_net_hdr_mrg_rxbuf *hdr;
+		struct rte_mbuf *m = pkts[i];
+
+		/* XXX: there is an assumption that no desc will be chained */
+		idx = vq->last_used_idx & mask;
+		desc = &descs[idx];
+
+		if (!desc_is_avail(vq, desc))
+			break;
+		rte_smp_rmb();
+
+		desc_addr = rte_vhost_gpa_to_vva(dev->mem, desc->addr);
+		/*
+		 * Checking of 'desc_addr' placed outside of 'unlikely' macro
+		 * to avoid performance issue with some versions of gcc (4.8.4
+		 * and 5.3.0) which otherwise stores offset on the stack instead
+		 * of in a register.
+		 */
+		if (unlikely(desc->len < dev->vhost_hlen) || !desc_addr)
+			break;
+
+		hdr = (struct virtio_net_hdr_mrg_rxbuf *)(uintptr_t)desc_addr;
+		virtio_enqueue_offload(m, &hdr->hdr);
+		vhost_log_write(dev, desc->addr, dev->vhost_hlen);
+		PRINT_PACKET(dev, (uintptr_t)desc_addr, dev->vhost_hlen, 0);
+
+		desc_offset = dev->vhost_hlen;
+		desc_avail  = desc->len - dev->vhost_hlen;
+
+		mbuf_avail  = rte_pktmbuf_data_len(m);
+		mbuf_offset = 0;
+		while (mbuf_avail != 0 || m->next != NULL) {
+			/* done with current mbuf, fetch next */
+			if (mbuf_avail == 0) {
+				m = m->next;
+
+				mbuf_offset = 0;
+				mbuf_avail  = rte_pktmbuf_data_len(m);
+			}
+
+			/* done with current desc buf, fetch next */
+			if (desc_avail == 0) {
+				if ((desc->flags & VRING_DESC_F_NEXT) == 0) {
+					/* Room in vring buffer is not enough */
+					goto end_of_tx;
+				}
+
+				idx = (idx + 1);
+				desc = &descs[idx];
+				if (unlikely(!desc_is_avail(vq, desc)))
+					goto end_of_tx;
+
+				desc_addr = rte_vhost_gpa_to_vva(dev->mem,
+								 desc->addr);
+				if (unlikely(!desc_addr))
+					goto end_of_tx;
+
+				desc_offset = 0;
+				desc_avail  = desc->len;
+			}
+
+			cpy_len = RTE_MIN(desc_avail, mbuf_avail);
+			rte_memcpy((void *)((uintptr_t)(desc_addr + desc_offset)),
+				rte_pktmbuf_mtod_offset(m, void *, mbuf_offset),
+				cpy_len);
+			vhost_log_write(dev, desc->addr + desc_offset, cpy_len);
+			PRINT_PACKET(dev, (uintptr_t)(desc_addr + desc_offset),
+				     cpy_len, 0);
+
+			mbuf_avail  -= cpy_len;
+			mbuf_offset += cpy_len;
+			desc_avail  -= cpy_len;
+			desc_offset += cpy_len;
+		}
+
+		descs[idx].len = pkts[i]->pkt_len + dev->vhost_hlen;
+		rte_smp_wmb();
+		set_desc_used(vq, desc);
+
+		vq->last_used_idx++;
+		if ((vq->last_used_idx & (vq->size - 1)) == 0)
+			toggle_wrap_counter(vq);
+	}
+
+out_access_unlock:
+	rte_spinlock_unlock(&vq->access_lock);
+end_of_tx:
+	count = i;
+
+	return count;
+}
+
 uint16_t
 rte_vhost_enqueue_burst(int vid, uint16_t queue_id,
 	struct rte_mbuf **pkts, uint16_t count)
-- 
2.14.3

^ permalink raw reply	[flat|nested] 27+ messages in thread
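
One detail worth noting in the function above: a goto end_of_tx taken from
inside the copy loop jumps past the rte_spinlock_unlock() under the
out_access_unlock label, so the queue lock would stay held on that path. A
minimal sketch of an exit sequence that releases the lock on every path,
reusing the same labels (a suggestion, not the code in this patch):

        /* ...main loop as above... */

end_of_tx:
out_access_unlock:
        count = i;
        rte_spinlock_unlock(&vq->access_lock);

        return count;
}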

* [dpdk-dev] [PATCH 14/14] vhost: enable packed virtqueues
  2018-01-29 14:11 [dpdk-dev] [PATCH 00/14] implement packed virtqueues Jens Freimann
                   ` (12 preceding siblings ...)
  2018-01-29 14:11 ` [dpdk-dev] [PATCH 13/14] vhost: packed queue enqueue path Jens Freimann
@ 2018-01-29 14:11 ` Jens Freimann
  2018-01-30 10:45   ` Jens Freimann
  2018-02-06  6:20 ` [dpdk-dev] [PATCH 00/14] implement " Jason Wang
  14 siblings, 1 reply; 27+ messages in thread
From: Jens Freimann @ 2018-01-29 14:11 UTC (permalink / raw)
  To: dev; +Cc: tiwei.bie, yliu, maxime.coquelin, mst

From: Yuanhan Liu <yuanhan.liu@linux.intel.com>

This patch enables the code to enqueue and dequeue packets to/from a
packed virtqueue.  Add the feature bit for packed virtqueues as defined
in the Virtio 1.1 draft.

Signed-off-by: Jens Freimann <jfreimann@redhat.com>
Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
---
 lib/librte_vhost/vhost.h      | 1 +
 lib/librte_vhost/virtio_net.c | 7 ++++++-
 2 files changed, 7 insertions(+), 1 deletion(-)

diff --git a/lib/librte_vhost/vhost.h b/lib/librte_vhost/vhost.h
index a3d4214b6..fbde54f9a 100644
--- a/lib/librte_vhost/vhost.h
+++ b/lib/librte_vhost/vhost.h
@@ -181,6 +181,7 @@ struct vhost_msg {
 				(1ULL << VIRTIO_NET_F_GUEST_ANNOUNCE) | \
 				(1ULL << VIRTIO_NET_F_MQ)      | \
 				(1ULL << VIRTIO_F_VERSION_1)   | \
+				(1ULL << VIRTIO_F_PACKED)      | \
 				(1ULL << VHOST_F_LOG_ALL)      | \
 				(1ULL << VHOST_USER_F_PROTOCOL_FEATURES) | \
 				(1ULL << VIRTIO_NET_F_GSO) | \
diff --git a/lib/librte_vhost/virtio_net.c b/lib/librte_vhost/virtio_net.c
index c1b77fff5..309178b3e 100644
--- a/lib/librte_vhost/virtio_net.c
+++ b/lib/librte_vhost/virtio_net.c
@@ -824,7 +824,9 @@ rte_vhost_enqueue_burst(int vid, uint16_t queue_id,
 	if (!dev)
 		return 0;
 
-	if (dev->features & (1 << VIRTIO_NET_F_MRG_RXBUF))
+	if (dev->features & (1ULL << VIRTIO_F_PACKED))
+		return vhost_enqueue_burst_1_1(dev, queue_id, pkts, count);
+	else if (dev->features & (1 << VIRTIO_NET_F_MRG_RXBUF))
 		return virtio_dev_merge_rx(dev, queue_id, pkts, count);
 	else
 		return virtio_dev_rx(dev, queue_id, pkts, count);
@@ -1456,6 +1458,9 @@ rte_vhost_dequeue_burst(int vid, uint16_t queue_id,
 	if (unlikely(vq->enabled == 0))
 		goto out_access_unlock;
 
+	if (dev->features & (1ULL << VIRTIO_F_PACKED))
+		return vhost_dequeue_burst_1_1(dev, vq, mbuf_pool, pkts, count);
+
 	vq->batch_copy_nb_elems = 0;
 
 	if (dev->features & (1ULL << VIRTIO_F_IOMMU_PLATFORM))
-- 
2.14.3

^ permalink raw reply	[flat|nested] 27+ messages in thread
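
As a usage note (not part of the patch set): an application linked against
librte_vhost can check whether a given device negotiated the packed layout
through the existing rte_vhost_get_negotiated_features() API. The snippet
below is a sketch, with VIRTIO_F_PACKED defined as in this series (bit 34):

#include <stdint.h>
#include <rte_vhost.h>

#ifndef VIRTIO_F_PACKED
#define VIRTIO_F_PACKED 34      /* as defined in this series */
#endif

/* Sketch: returns 1 if the device uses packed virtqueues, 0 if not,
 * -1 if the negotiated features could not be read. */
static int
vid_uses_packed_ring(int vid)
{
        uint64_t features = 0;

        if (rte_vhost_get_negotiated_features(vid, &features) != 0)
                return -1;

        return !!(features & (1ULL << VIRTIO_F_PACKED));
}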

* Re: [dpdk-dev] [PATCH 09/14] vhost: add virtio 1.1 defines
  2018-01-29 14:11 ` [dpdk-dev] [PATCH 09/14] vhost: add virtio 1.1 defines Jens Freimann
@ 2018-01-29 14:24   ` Yuanhan Liu
  2018-01-29 14:28     ` Jens Freimann
  0 siblings, 1 reply; 27+ messages in thread
From: Yuanhan Liu @ 2018-01-29 14:24 UTC (permalink / raw)
  To: Jens Freimann; +Cc: dev, tiwei.bie, maxime.coquelin, mst

On Mon, Jan 29, 2018 at 03:11:38PM +0100, Jens Freimann wrote:
> This should actually be in the kernel header file, but it isn't
> yet. For now let's use our own headers.
> 
> Signed-off-by: Jens Freimann <jfreimann@redhat.com>
> ---
>  lib/librte_vhost/vhost.h      |  4 +++-
>  lib/librte_vhost/virtio-1.1.h | 20 ++++++++++++++++++++
>  2 files changed, 23 insertions(+), 1 deletion(-)
>  create mode 100644 lib/librte_vhost/virtio-1.1.h
> 
> diff --git a/lib/librte_vhost/vhost.h b/lib/librte_vhost/vhost.h
> index e52a9b69c..152e8bce1 100644
> --- a/lib/librte_vhost/vhost.h
> +++ b/lib/librte_vhost/vhost.h
> @@ -167,7 +167,9 @@ struct vhost_msg {
>  #ifndef VIRTIO_F_VERSION_1
>   #define VIRTIO_F_VERSION_1 32
>  #endif
> -
> +#ifndef VIRTIO_F_PACKED
> + #define VIRTIO_F_PACKED 34
> +#endif
>  #define VHOST_USER_F_PROTOCOL_FEATURES	30
>  
>  /* Features supported by this builtin vhost-user net driver. */
> diff --git a/lib/librte_vhost/virtio-1.1.h b/lib/librte_vhost/virtio-1.1.h
> new file mode 100644
> index 000000000..5ca0bc33f
> --- /dev/null
> +++ b/lib/librte_vhost/virtio-1.1.h
> @@ -0,0 +1,20 @@
> +#ifndef __VIRTIO_1_1_H
> +#define __VIRTIO_1_1_H
> +
> +#define VRING_DESC_F_NEXT       1
> +#define VRING_DESC_F_WRITE      2
> +#define VRING_DESC_F_INDIRECT   4
> +
> +#define VIRTQ_DESC_F_AVAIL      7
> +#define VIRTQ_DESC_F_USED      15
> +#define DESC_USED               (1ULL << VIRTQ_DESC_F_USED)
> +#define DESC_AVAIL              (1ULL << VIRTQ_DESC_F_AVAIL)
> +
> +struct vring_desc_1_1 {

Is it how it's named in the spec? If not, I'm wondering "vring_desc_packed"
might be a better name?

If so, we could rename others, like rename "_dequeue_1_1" to "_dequeue_packed".

	--yliu

> +	uint64_t addr;
> +	uint32_t len;
> +	uint16_t index;
> +	uint16_t flags;
> +};
> +
> +#endif /* __VIRTIO_1_1_H */
> -- 
> 2.14.3

^ permalink raw reply	[flat|nested] 27+ messages in thread
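
For context on how these flag bits are meant to be used: per the Virtio 1.1
draft, a driver makes a descriptor available by setting the AVAIL bit to its
own wrap counter and the USED bit to the inverse; the device later sets both
bits to its wrap counter when the descriptor is used. A sketch of the driver
side (the helper name is hypothetical, not from this series):

#include <stdint.h>
#include "virtio-1.1.h"   /* struct vring_desc_1_1, DESC_AVAIL, DESC_USED */

/* Sketch: driver marks a descriptor available. addr/len/index must be
 * written, and a write barrier issued, before this flags store. */
static inline void
sketch_set_desc_avail(struct vring_desc_1_1 *desc, int avail_wrap_counter)
{
        uint16_t flags = desc->flags & (uint16_t)~(DESC_AVAIL | DESC_USED);

        if (avail_wrap_counter)
                flags |= DESC_AVAIL;
        else
                flags |= DESC_USED;

        desc->flags = flags;
}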

* Re: [dpdk-dev] [PATCH 09/14] vhost: add virtio 1.1 defines
  2018-01-29 14:24   ` Yuanhan Liu
@ 2018-01-29 14:28     ` Jens Freimann
  0 siblings, 0 replies; 27+ messages in thread
From: Jens Freimann @ 2018-01-29 14:28 UTC (permalink / raw)
  To: Yuanhan Liu; +Cc: dev, tiwei.bie, maxime.coquelin, mst

On Mon, Jan 29, 2018 at 10:24:49PM +0800, Yuanhan Liu wrote:
>On Mon, Jan 29, 2018 at 03:11:38PM +0100, Jens Freimann wrote:
>> This should actually be in the kernel header file, but it isn't
>> yet. For now let's use our own headers.
>>
>> Signed-off-by: Jens Freimann <jfreimann@redhat.com>
>> ---
>>  lib/librte_vhost/vhost.h      |  4 +++-
>>  lib/librte_vhost/virtio-1.1.h | 20 ++++++++++++++++++++
>>  2 files changed, 23 insertions(+), 1 deletion(-)
>>  create mode 100644 lib/librte_vhost/virtio-1.1.h
>>
>> diff --git a/lib/librte_vhost/vhost.h b/lib/librte_vhost/vhost.h
>> index e52a9b69c..152e8bce1 100644
>> --- a/lib/librte_vhost/vhost.h
>> +++ b/lib/librte_vhost/vhost.h
>> @@ -167,7 +167,9 @@ struct vhost_msg {
>>  #ifndef VIRTIO_F_VERSION_1
>>   #define VIRTIO_F_VERSION_1 32
>>  #endif
>> -
>> +#ifndef VIRTIO_F_PACKED
>> + #define VIRTIO_F_PACKED 34
>> +#endif
>>  #define VHOST_USER_F_PROTOCOL_FEATURES	30
>>
>>  /* Features supported by this builtin vhost-user net driver. */
>> diff --git a/lib/librte_vhost/virtio-1.1.h b/lib/librte_vhost/virtio-1.1.h
>> new file mode 100644
>> index 000000000..5ca0bc33f
>> --- /dev/null
>> +++ b/lib/librte_vhost/virtio-1.1.h
>> @@ -0,0 +1,20 @@
>> +#ifndef __VIRTIO_1_1_H
>> +#define __VIRTIO_1_1_H
>> +
>> +#define VRING_DESC_F_NEXT       1
>> +#define VRING_DESC_F_WRITE      2
>> +#define VRING_DESC_F_INDIRECT   4
>> +
>> +#define VIRTQ_DESC_F_AVAIL      7
>> +#define VIRTQ_DESC_F_USED      15
>> +#define DESC_USED               (1ULL << VIRTQ_DESC_F_USED)
>> +#define DESC_AVAIL              (1ULL << VIRTQ_DESC_F_AVAIL)
>> +
>> +struct vring_desc_1_1 {
>
>Is it how it's named in the spec? If not, I'm wondering "vring_desc_packed"
>might be a better name?
>
>If so, we could rename others, like rename "_dequeue_1_1" to "_dequeue_packed".

I tried to name everything _packed, but obviously missed this one.
Will change it in the next version. Thanks!

regards,
Jens 

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [dpdk-dev] [PATCH 14/14] vhost: enable packed virtqueues
  2018-01-29 14:11 ` [dpdk-dev] [PATCH 14/14] vhost: enable packed virtqueues Jens Freimann
@ 2018-01-30 10:45   ` Jens Freimann
  0 siblings, 0 replies; 27+ messages in thread
From: Jens Freimann @ 2018-01-30 10:45 UTC (permalink / raw)
  To: Jens Freimann; +Cc: dev, tiwei.bie, yliu, maxime.coquelin, mst

> diff --git a/lib/librte_vhost/vhost.h b/lib/librte_vhost/vhost.h
> index a3d4214b6..fbde54f9a 100644
> --- a/lib/librte_vhost/vhost.h
> +++ b/lib/librte_vhost/vhost.h
> @@ -181,6 +181,7 @@ struct vhost_msg {
>  				(1ULL << VIRTIO_NET_F_GUEST_ANNOUNCE) | \
>  				(1ULL << VIRTIO_NET_F_MQ)      | \
>  				(1ULL << VIRTIO_F_VERSION_1)   | \
> +				(1ULL << VIRTIO_F_PACKED)      | \

Should be VIRTIO_F_PACKED_RING. Will fix it here and in other patches as well.

regards
Jens

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [dpdk-dev] [PATCH 10/14] vhost: vring address setup for packed queues
  2018-01-29 14:11 ` [dpdk-dev] [PATCH 10/14] vhost: vring address setup for packed queues Jens Freimann
@ 2018-02-01  9:16   ` Maxime Coquelin
  0 siblings, 0 replies; 27+ messages in thread
From: Maxime Coquelin @ 2018-02-01  9:16 UTC (permalink / raw)
  To: Jens Freimann, dev; +Cc: tiwei.bie, yliu, mst

Hi Jens,

On 01/29/2018 03:11 PM, Jens Freimann wrote:
> From: Yuanhan Liu <yuanhan.liu@linux.intel.com>
> 
> Add code to set up packed queues when enabled.
> 
> Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
> Signed-off-by: Jens Freimann <jfreiman@redhat.com>
> ---
>   lib/librte_vhost/vhost.c      |  4 ++++
>   lib/librte_vhost/vhost.h      |  1 +
>   lib/librte_vhost/vhost_user.c | 17 ++++++++++++++++-
>   3 files changed, 21 insertions(+), 1 deletion(-)
> 
> diff --git a/lib/librte_vhost/vhost.c b/lib/librte_vhost/vhost.c
> index 1dd9adbc7..78913912c 100644
> --- a/lib/librte_vhost/vhost.c
> +++ b/lib/librte_vhost/vhost.c
> @@ -536,6 +536,9 @@ rte_vhost_enable_guest_notification(int vid, uint16_t queue_id, int enable)
>   {
>   	struct virtio_net *dev = get_device(vid);
>   
> +	if (dev->features & (1ULL << VIRTIO_F_PACKED))
> +		return 0;
> +
This check should be done after dev is checked non-null.

>   	if (dev == NULL)
>   		return -1;
>   
> @@ -545,6 +548,7 @@ rte_vhost_enable_guest_notification(int vid, uint16_t queue_id, int enable)
>   		return -1;
>   	}
>   
> +
Trailing line.

>   	dev->virtqueue[queue_id]->used->flags = VRING_USED_F_NO_NOTIFY;
>   	return 0;
>   }

^ permalink raw reply	[flat|nested] 27+ messages in thread
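
Maxime's point about the ordering translates into something like the
following arrangement of the function (a sketch of the fixed-up hunk, not
the actual v2):

int
rte_vhost_enable_guest_notification(int vid, uint16_t queue_id, int enable)
{
        struct virtio_net *dev = get_device(vid);

        if (dev == NULL)
                return -1;

        /* Only dereference dev after the NULL check above. */
        if (dev->features & (1ULL << VIRTIO_F_PACKED))
                return 0;

        /* ...existing queue_id validation unchanged... */

        dev->virtqueue[queue_id]->used->flags = VRING_USED_F_NO_NOTIFY;
        return 0;
}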

* Re: [dpdk-dev] [PATCH 11/14] vhost: add helpers for packed virtqueues
  2018-01-29 14:11 ` [dpdk-dev] [PATCH 11/14] vhost: add helpers for packed virtqueues Jens Freimann
@ 2018-02-01  9:20   ` Maxime Coquelin
  0 siblings, 0 replies; 27+ messages in thread
From: Maxime Coquelin @ 2018-02-01  9:20 UTC (permalink / raw)
  To: Jens Freimann, dev; +Cc: tiwei.bie, yliu, mst



On 01/29/2018 03:11 PM, Jens Freimann wrote:
> Add some helper functions to set/check descriptor flags
> and toggle the used wrap counter.
> 
> Signed-off-by: Jens Freimann<jfreimann@redhat.com>
> ---
>   lib/librte_vhost/virtio-1.1.h | 43 +++++++++++++++++++++++++++++++++++++++++++
>   1 file changed, 43 insertions(+)
> 
> diff --git a/lib/librte_vhost/virtio-1.1.h b/lib/librte_vhost/virtio-1.1.h
> index 5ca0bc33f..84039797e 100644
> --- a/lib/librte_vhost/virtio-1.1.h
> +++ b/lib/librte_vhost/virtio-1.1.h
> @@ -17,4 +17,47 @@ struct vring_desc_1_1 {
>   	uint16_t flags;
>   };
>   
> +static inline void
> +toggle_wrap_counter(struct vhost_virtqueue *vq)
> +{
> +	vq->used_wrap_counter ^= 1;
> +}
> +
> +static inline int
> +desc_is_avail(struct vhost_virtqueue *vq, struct vring_desc_1_1 *desc)
> +{
> +	if (!vq)
> +		return -1;

Maybe use unlikely() here?

Maxime

^ permalink raw reply	[flat|nested] 27+ messages in thread
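
The helper is only partially quoted above; one way desc_is_avail() could
look with the unlikely() hint applied, assuming the AVAIL/USED bit semantics
from the Virtio 1.1 draft (a sketch, not necessarily the exact body in the
patch):

/* Assumes vhost.h and virtio-1.1.h from this series are included. */
#include <rte_branch_prediction.h>      /* unlikely() */

static inline int
desc_is_avail(struct vhost_virtqueue *vq, struct vring_desc_1_1 *desc)
{
        if (unlikely(!vq))
                return -1;

        /* Available when AVAIL matches the wrap counter and USED does not. */
        if (vq->used_wrap_counter)
                return (desc->flags & DESC_AVAIL) &&
                       !(desc->flags & DESC_USED);

        return !(desc->flags & DESC_AVAIL) &&
               (desc->flags & DESC_USED);
}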

* Re: [dpdk-dev] [PATCH 12/14] vhost: dequeue for packed queues
  2018-01-29 14:11 ` [dpdk-dev] [PATCH 12/14] vhost: dequeue for packed queues Jens Freimann
@ 2018-02-01  9:35   ` Maxime Coquelin
  2018-02-01 10:23     ` Jens Freimann
  0 siblings, 1 reply; 27+ messages in thread
From: Maxime Coquelin @ 2018-02-01  9:35 UTC (permalink / raw)
  To: Jens Freimann, dev; +Cc: tiwei.bie, yliu, mst



On 01/29/2018 03:11 PM, Jens Freimann wrote:
> Implement code to dequeue and process descriptors from
> the vring if VIRTIO_F_PACKED is enabled.
> 
> Check if descriptor was made available by driver by looking at
> VIRTIO_F_DESC_AVAIL flag in descriptor. If so dequeue and set
> the used flag VIRTIO_F_DESC_USED to the current value of the
> used wrap counter.
> 
> Used ring wrap counter needs to be toggled when last descriptor is
> written out. This allows the host/guest to detect new descriptors even
> after the ring has wrapped.
> 
> Signed-off-by: Jens Freimann <jfreimann@redhat.com>
> ---
>   lib/librte_vhost/vhost.c      |   1 +
>   lib/librte_vhost/vhost.h      |   1 +
>   lib/librte_vhost/virtio_net.c | 194 ++++++++++++++++++++++++++++++++++++++++++
>   3 files changed, 196 insertions(+)
> 
> diff --git a/lib/librte_vhost/vhost.c b/lib/librte_vhost/vhost.c
> index 78913912c..e5f58d9c8 100644
> --- a/lib/librte_vhost/vhost.c
> +++ b/lib/librte_vhost/vhost.c
> @@ -191,6 +191,7 @@ init_vring_queue(struct virtio_net *dev, uint32_t vring_idx)
>   
>   	vq->kickfd = VIRTIO_UNINITIALIZED_EVENTFD;
>   	vq->callfd = VIRTIO_UNINITIALIZED_EVENTFD;
> +	vq->used_wrap_counter = 1;
>   
>   	vhost_user_iotlb_init(dev, vring_idx);
>   	/* Backends are set to -1 indicating an inactive device. */
> diff --git a/lib/librte_vhost/vhost.h b/lib/librte_vhost/vhost.h
> index 8554d51d8..a3d4214b6 100644
> --- a/lib/librte_vhost/vhost.h
> +++ b/lib/librte_vhost/vhost.h
> @@ -106,6 +106,7 @@ struct vhost_virtqueue {
>   
>   	struct batch_copy_elem	*batch_copy_elems;
>   	uint16_t		batch_copy_nb_elems;
> +	uint32_t		used_wrap_counter;
>   
>   	rte_rwlock_t	iotlb_lock;
>   	rte_rwlock_t	iotlb_pending_lock;
> diff --git a/lib/librte_vhost/virtio_net.c b/lib/librte_vhost/virtio_net.c
> index edfab3ba6..5d4cfe8cc 100644
> --- a/lib/librte_vhost/virtio_net.c
> +++ b/lib/librte_vhost/virtio_net.c
> @@ -19,6 +19,7 @@
>   
>   #include "iotlb.h"
>   #include "vhost.h"
> +#include "virtio-1.1.h"
>   
>   #define MAX_PKT_BURST 32
>   
> @@ -1111,6 +1112,199 @@ restore_mbuf(struct rte_mbuf *m)
>   	}
>   }
>   
> +static inline uint16_t
> +dequeue_desc(struct virtio_net *dev, struct vhost_virtqueue *vq,
> +	     struct rte_mempool *mbuf_pool, struct rte_mbuf *m,
> +	     struct vring_desc_1_1 *descs)
> +{
> +	struct vring_desc_1_1 *desc;
> +	uint64_t desc_addr;
> +	uint32_t desc_avail, desc_offset;
> +	uint32_t mbuf_avail, mbuf_offset;
> +	uint32_t cpy_len;
> +	struct rte_mbuf *cur = m, *prev = m;
> +	struct virtio_net_hdr *hdr = NULL;
> +	uint16_t head_idx = vq->last_used_idx & (vq->size - 1);
> +	int wrap_counter = vq->used_wrap_counter;
> +
> +	desc = &descs[vq->last_used_idx & (vq->size - 1)];
> +	if (unlikely((desc->len < dev->vhost_hlen)) ||
> +			(desc->flags & VRING_DESC_F_INDIRECT))
> +		rte_panic("INDIRECT not supported yet");

Using rte_panic() may not be a good idea here, because a malicious guest
could make the vswitch crash easily.

> +
> +	desc_addr = rte_vhost_gpa_to_vva(dev->mem, desc->addr);

You should use vhost_iova_to_vva() here and everywhere else, otherwise
you break IOMMU support.

> +	if (unlikely(!desc_addr))
> +		return -1;
> +
> +	if (virtio_net_with_host_offload(dev)) {
> +		hdr = (struct virtio_net_hdr *)((uintptr_t)desc_addr);
> +		rte_prefetch0(hdr);
> +	}
> +
> +	/*
> +	 * A virtio driver normally uses at least 2 desc buffers
> +	 * for Tx: the first for storing the header, and others
> +	 * for storing the data.
> +	 */
> +	if (likely((desc->len == dev->vhost_hlen) &&
> +		   (desc->flags & VRING_DESC_F_NEXT) != 0)) {
> +		if ((++vq->last_used_idx & (vq->size - 1)) == 0)
> +			toggle_wrap_counter(vq);
> +
> +		desc = &descs[vq->last_used_idx & (vq->size - 1)];
> +
> +		if (unlikely(desc->flags & VRING_DESC_F_INDIRECT))
> +			rte_panic("INDIRECT not supported yet");
Ditto.

> +
> +		desc_addr = rte_vhost_gpa_to_vva(dev->mem, desc->addr);
> +		if (unlikely(!desc_addr))
> +			return -1;
> +
> +		desc_offset = 0;
> +		desc_avail  = desc->len;
> +	} else {
> +		desc_avail  = desc->len - dev->vhost_hlen;
> +		desc_offset = dev->vhost_hlen;
> +	}
> +
> +	rte_prefetch0((void *)(uintptr_t)(desc_addr + desc_offset));
> +
> +	PRINT_PACKET(dev, (uintptr_t)(desc_addr + desc_offset), desc_avail, 0);
> +
> +	mbuf_offset = 0;
> +	mbuf_avail  = m->buf_len - RTE_PKTMBUF_HEADROOM;
> +	while (1) {
> +		uint64_t hpa;
> +
> +		cpy_len = RTE_MIN(desc_avail, mbuf_avail);
> +
> +		/*
> +		 * A desc buf might across two host physical pages that are
> +		 * not continuous. In such case (gpa_to_hpa returns 0), data
> +		 * will be copied even though zero copy is enabled.
> +		 */
> +		if (unlikely(dev->dequeue_zero_copy && (hpa = gpa_to_hpa(dev,
> +					desc->addr + desc_offset, cpy_len)))) {
> +			cur->data_len = cpy_len;
> +			cur->data_off = 0;
> +			cur->buf_addr = (void *)(uintptr_t)desc_addr;
> +			cur->buf_physaddr = hpa;
> +
> +			/*
> +			 * In zero copy mode, one mbuf can only reference data
> +			 * for one or partial of one desc buff.
> +			 */
> +			mbuf_avail = cpy_len;
> +		} else {
> +			rte_memcpy(rte_pktmbuf_mtod_offset(cur, void *,
> +							   mbuf_offset),
> +				(void *)((uintptr_t)(desc_addr + desc_offset)),
> +				cpy_len);
> +		}
> +
> +		mbuf_avail  -= cpy_len;
> +		mbuf_offset += cpy_len;
> +		desc_avail  -= cpy_len;
> +		desc_offset += cpy_len;
> +
> +		/* This desc reaches to its end, get the next one */
> +		if (desc_avail == 0) {
> +			if ((desc->flags & VRING_DESC_F_NEXT) == 0)
> +				break;
> +
> +			if ((++vq->last_used_idx & (vq->size - 1)) == 0)
> +				toggle_wrap_counter(vq);
> +
> +			desc = &descs[vq->last_used_idx & (vq->size - 1)];
> +			if (unlikely(desc->flags & VRING_DESC_F_INDIRECT))
> +				rte_panic("INDIRECT not supported yet");
> +
> +			desc_addr = rte_vhost_gpa_to_vva(dev->mem, desc->addr);
> +			if (unlikely(!desc_addr))
> +				return -1;
> +
> +			rte_prefetch0((void *)(uintptr_t)desc_addr);
> +
> +			desc_offset = 0;
> +			desc_avail  = desc->len;
> +
> +			PRINT_PACKET(dev, (uintptr_t)desc_addr, desc->len, 0);
> +		}
> +
> +		/*
> +		 * This mbuf reaches to its end, get a new one
> +		 * to hold more data.
> +		 */
> +		if (mbuf_avail == 0) {
> +			cur = rte_pktmbuf_alloc(mbuf_pool);
> +			if (unlikely(cur == NULL)) {
> +				RTE_LOG(ERR, VHOST_DATA, "Failed to "
> +					"allocate memory for mbuf.\n");
> +				return -1;
> +			}
> +
> +			prev->next = cur;
> +			prev->data_len = mbuf_offset;
> +			m->nb_segs += 1;
> +			m->pkt_len += mbuf_offset;
> +			prev = cur;
> +
> +			mbuf_offset = 0;
> +			mbuf_avail  = cur->buf_len - RTE_PKTMBUF_HEADROOM;
> +		}
> +	}
> +
> +	if (hdr)
> +		vhost_dequeue_offload(hdr, m);
> +
> +	if ((++vq->last_used_idx & (vq->size - 1)) == 0)
> +		toggle_wrap_counter(vq);
> +
> +	rte_smp_wmb();
> +	_set_desc_used(&descs[head_idx], wrap_counter);
> +
> +	prev->data_len = mbuf_offset;
> +	m->pkt_len    += mbuf_offset;
> +
> +	return 0;
> +}
> +
> +static inline uint16_t
> +vhost_dequeue_burst_1_1(struct virtio_net *dev, struct vhost_virtqueue *vq,
> +			struct rte_mempool *mbuf_pool, struct rte_mbuf **pkts,
> +			uint16_t count)
> +{
> +	uint16_t i;
> +	uint16_t idx;
> +	struct vring_desc_1_1 *desc = vq->desc_1_1;
> +	int err;
> +
> +	count = RTE_MIN(MAX_PKT_BURST, count);
> +	for (i = 0; i < count; i++) {
> +		idx = vq->last_used_idx & (vq->size - 1);
> +		if (!desc_is_avail(vq, &desc[idx]))
> +			break;
> +		rte_smp_rmb();
> +
> +		pkts[i] = rte_pktmbuf_alloc(mbuf_pool);
> +		if (unlikely(pkts[i] == NULL)) {
> +			RTE_LOG(ERR, VHOST_DATA,
> +				"Failed to allocate memory for mbuf.\n");
> +			break;
> +		}
> +
> +		err = dequeue_desc(dev, vq, mbuf_pool, pkts[i], desc);
> +		if (unlikely(err)) {
> +			rte_pktmbuf_free(pkts[i]);
> +			break;
> +		}
> +	}
> +
> +	rte_spinlock_unlock(&vq->access_lock);

Where is it locked? It looks unbalanced.

> +
> +	return i;
> +}
> +
>   uint16_t
>   rte_vhost_dequeue_burst(int vid, uint16_t queue_id,
>   	struct rte_mempool *mbuf_pool, struct rte_mbuf **pkts, uint16_t count)
> 

^ permalink raw reply	[flat|nested] 27+ messages in thread
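
On the rte_panic() point: a condition that a guest can trigger is better
handled by logging and bailing out of the burst. A sketch of what the hunk
above could do instead (the log text is illustrative; the caller in
vhost_dequeue_burst_1_1() already frees the mbuf and stops the burst on a
non-zero return):

        desc = &descs[vq->last_used_idx & (vq->size - 1)];
        if (unlikely(desc->len < dev->vhost_hlen ||
                     (desc->flags & VRING_DESC_F_INDIRECT))) {
                RTE_LOG(ERR, VHOST_DATA,
                        "(%d) indirect descriptors not supported yet\n",
                        dev->vid);
                return -1;
        }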

* Re: [dpdk-dev] [PATCH 12/14] vhost: dequeue for packed queues
  2018-02-01  9:35   ` Maxime Coquelin
@ 2018-02-01 10:23     ` Jens Freimann
  0 siblings, 0 replies; 27+ messages in thread
From: Jens Freimann @ 2018-02-01 10:23 UTC (permalink / raw)
  To: Maxime Coquelin; +Cc: dev, tiwei.bie, yliu, mst

On Thu, Feb 01, 2018 at 10:35:18AM +0100, Maxime Coquelin wrote:
>
>
>On 01/29/2018 03:11 PM, Jens Freimann wrote:
>>Implement code to dequeue and process descriptors from
>>the vring if VIRTIO_F_PACKED is enabled.
>>
>>Check if descriptor was made available by driver by looking at
>>VIRTIO_F_DESC_AVAIL flag in descriptor. If so dequeue and set
>>the used flag VIRTIO_F_DESC_USED to the current value of the
>>used wrap counter.
>>
>>Used ring wrap counter needs to be toggled when last descriptor is
>>written out. This allows the host/guest to detect new descriptors even
>>after the ring has wrapped.
>>
>>Signed-off-by: Jens Freimann <jfreimann@redhat.com>
>>---
>>  lib/librte_vhost/vhost.c      |   1 +
>>  lib/librte_vhost/vhost.h      |   1 +
>>  lib/librte_vhost/virtio_net.c | 194 ++++++++++++++++++++++++++++++++++++++++++
>>  3 files changed, 196 insertions(+)
>>
>>diff --git a/lib/librte_vhost/vhost.c b/lib/librte_vhost/vhost.c
>>index 78913912c..e5f58d9c8 100644
>>--- a/lib/librte_vhost/vhost.c
>>+++ b/lib/librte_vhost/vhost.c
>>@@ -191,6 +191,7 @@ init_vring_queue(struct virtio_net *dev, uint32_t vring_idx)
>>  	vq->kickfd = VIRTIO_UNINITIALIZED_EVENTFD;
>>  	vq->callfd = VIRTIO_UNINITIALIZED_EVENTFD;
>>+	vq->used_wrap_counter = 1;
>>  	vhost_user_iotlb_init(dev, vring_idx);
>>  	/* Backends are set to -1 indicating an inactive device. */
>>diff --git a/lib/librte_vhost/vhost.h b/lib/librte_vhost/vhost.h
>>index 8554d51d8..a3d4214b6 100644
>>--- a/lib/librte_vhost/vhost.h
>>+++ b/lib/librte_vhost/vhost.h
>>@@ -106,6 +106,7 @@ struct vhost_virtqueue {
>>  	struct batch_copy_elem	*batch_copy_elems;
>>  	uint16_t		batch_copy_nb_elems;
>>+	uint32_t		used_wrap_counter;
>>  	rte_rwlock_t	iotlb_lock;
>>  	rte_rwlock_t	iotlb_pending_lock;
>>diff --git a/lib/librte_vhost/virtio_net.c b/lib/librte_vhost/virtio_net.c
>>index edfab3ba6..5d4cfe8cc 100644
>>--- a/lib/librte_vhost/virtio_net.c
>>+++ b/lib/librte_vhost/virtio_net.c
>>@@ -19,6 +19,7 @@
>>  #include "iotlb.h"
>>  #include "vhost.h"
>>+#include "virtio-1.1.h"
>>  #define MAX_PKT_BURST 32
>>@@ -1111,6 +1112,199 @@ restore_mbuf(struct rte_mbuf *m)
>>  	}
>>  }
>>+static inline uint16_t
>>+dequeue_desc(struct virtio_net *dev, struct vhost_virtqueue *vq,
>>+	     struct rte_mempool *mbuf_pool, struct rte_mbuf *m,
>>+	     struct vring_desc_1_1 *descs)
>>+{
>>+	struct vring_desc_1_1 *desc;
>>+	uint64_t desc_addr;
>>+	uint32_t desc_avail, desc_offset;
>>+	uint32_t mbuf_avail, mbuf_offset;
>>+	uint32_t cpy_len;
>>+	struct rte_mbuf *cur = m, *prev = m;
>>+	struct virtio_net_hdr *hdr = NULL;
>>+	uint16_t head_idx = vq->last_used_idx & (vq->size - 1);
>>+	int wrap_counter = vq->used_wrap_counter;
>>+
>>+	desc = &descs[vq->last_used_idx & (vq->size - 1)];
>>+	if (unlikely((desc->len < dev->vhost_hlen)) ||
>>+			(desc->flags & VRING_DESC_F_INDIRECT))
>>+		rte_panic("INDIRECT not supported yet");
>
>Using rte_panic() may not be a good idea here, because a malicious guest
>could make the vswitch crash easily.

Good point. It was for debugging only, I will remove it. 

>>+
>>+	desc_addr = rte_vhost_gpa_to_vva(dev->mem, desc->addr);
>
>You should use vhost_iova_to_vva() here and everywhere else, otherwise
>you break IOMMU support.

Yes, I'll change it. 

>>+	if (unlikely(!desc_addr))
>>+		return -1;
>>+
>>+	if (virtio_net_with_host_offload(dev)) {
>>+		hdr = (struct virtio_net_hdr *)((uintptr_t)desc_addr);
>>+		rte_prefetch0(hdr);
>>+	}
>>+
>>+	/*
>>+	 * A virtio driver normally uses at least 2 desc buffers
>>+	 * for Tx: the first for storing the header, and others
>>+	 * for storing the data.
>>+	 */
>>+	if (likely((desc->len == dev->vhost_hlen) &&
>>+		   (desc->flags & VRING_DESC_F_NEXT) != 0)) {
>>+		if ((++vq->last_used_idx & (vq->size - 1)) == 0)
>>+			toggle_wrap_counter(vq);
>>+
>>+		desc = &descs[vq->last_used_idx & (vq->size - 1)];
>>+
>>+		if (unlikely(desc->flags & VRING_DESC_F_INDIRECT))
>>+			rte_panic("INDIRECT not supported yet");
>Ditto.
>
>>+
>>+		desc_addr = rte_vhost_gpa_to_vva(dev->mem, desc->addr);
>>+		if (unlikely(!desc_addr))
>>+			return -1;
>>+
>>+		desc_offset = 0;
>>+		desc_avail  = desc->len;
>>+	} else {
>>+		desc_avail  = desc->len - dev->vhost_hlen;
>>+		desc_offset = dev->vhost_hlen;
>>+	}
>>+
>>+	rte_prefetch0((void *)(uintptr_t)(desc_addr + desc_offset));
>>+
>>+	PRINT_PACKET(dev, (uintptr_t)(desc_addr + desc_offset), desc_avail, 0);
>>+
>>+	mbuf_offset = 0;
>>+	mbuf_avail  = m->buf_len - RTE_PKTMBUF_HEADROOM;
>>+	while (1) {
>>+		uint64_t hpa;
>>+
>>+		cpy_len = RTE_MIN(desc_avail, mbuf_avail);
>>+
>>+		/*
>>+		 * A desc buf might across two host physical pages that are
>>+		 * not continuous. In such case (gpa_to_hpa returns 0), data
>>+		 * will be copied even though zero copy is enabled.
>>+		 */
>>+		if (unlikely(dev->dequeue_zero_copy && (hpa = gpa_to_hpa(dev,
>>+					desc->addr + desc_offset, cpy_len)))) {
>>+			cur->data_len = cpy_len;
>>+			cur->data_off = 0;
>>+			cur->buf_addr = (void *)(uintptr_t)desc_addr;
>>+			cur->buf_physaddr = hpa;
>>+
>>+			/*
>>+			 * In zero copy mode, one mbuf can only reference data
>>+			 * for one or partial of one desc buff.
>>+			 */
>>+			mbuf_avail = cpy_len;
>>+		} else {
>>+			rte_memcpy(rte_pktmbuf_mtod_offset(cur, void *,
>>+							   mbuf_offset),
>>+				(void *)((uintptr_t)(desc_addr + desc_offset)),
>>+				cpy_len);
>>+		}
>>+
>>+		mbuf_avail  -= cpy_len;
>>+		mbuf_offset += cpy_len;
>>+		desc_avail  -= cpy_len;
>>+		desc_offset += cpy_len;
>>+
>>+		/* This desc reaches to its end, get the next one */
>>+		if (desc_avail == 0) {
>>+			if ((desc->flags & VRING_DESC_F_NEXT) == 0)
>>+				break;
>>+
>>+			if ((++vq->last_used_idx & (vq->size - 1)) == 0)
>>+				toggle_wrap_counter(vq);
>>+
>>+			desc = &descs[vq->last_used_idx & (vq->size - 1)];
>>+			if (unlikely(desc->flags & VRING_DESC_F_INDIRECT))
>>+				rte_panic("INDIRECT not supported yet");
>>+
>>+			desc_addr = rte_vhost_gpa_to_vva(dev->mem, desc->addr);
>>+			if (unlikely(!desc_addr))
>>+				return -1;
>>+
>>+			rte_prefetch0((void *)(uintptr_t)desc_addr);
>>+
>>+			desc_offset = 0;
>>+			desc_avail  = desc->len;
>>+
>>+			PRINT_PACKET(dev, (uintptr_t)desc_addr, desc->len, 0);
>>+		}
>>+
>>+		/*
>>+		 * This mbuf reaches to its end, get a new one
>>+		 * to hold more data.
>>+		 */
>>+		if (mbuf_avail == 0) {
>>+			cur = rte_pktmbuf_alloc(mbuf_pool);
>>+			if (unlikely(cur == NULL)) {
>>+				RTE_LOG(ERR, VHOST_DATA, "Failed to "
>>+					"allocate memory for mbuf.\n");
>>+				return -1;
>>+			}
>>+
>>+			prev->next = cur;
>>+			prev->data_len = mbuf_offset;
>>+			m->nb_segs += 1;
>>+			m->pkt_len += mbuf_offset;
>>+			prev = cur;
>>+
>>+			mbuf_offset = 0;
>>+			mbuf_avail  = cur->buf_len - RTE_PKTMBUF_HEADROOM;
>>+		}
>>+	}
>>+
>>+	if (hdr)
>>+		vhost_dequeue_offload(hdr, m);
>>+
>>+	if ((++vq->last_used_idx & (vq->size - 1)) == 0)
>>+		toggle_wrap_counter(vq);
>>+
>>+	rte_smp_wmb();
>>+	_set_desc_used(&descs[head_idx], wrap_counter);
>>+
>>+	prev->data_len = mbuf_offset;
>>+	m->pkt_len    += mbuf_offset;
>>+
>>+	return 0;
>>+}
>>+
>>+static inline uint16_t
>>+vhost_dequeue_burst_1_1(struct virtio_net *dev, struct vhost_virtqueue *vq,
>>+			struct rte_mempool *mbuf_pool, struct rte_mbuf **pkts,
>>+			uint16_t count)
>>+{
>>+	uint16_t i;
>>+	uint16_t idx;
>>+	struct vring_desc_1_1 *desc = vq->desc_1_1;
>>+	int err;
>>+
>>+	count = RTE_MIN(MAX_PKT_BURST, count);
>>+	for (i = 0; i < count; i++) {
>>+		idx = vq->last_used_idx & (vq->size - 1);
>>+		if (!desc_is_avail(vq, &desc[idx]))
>>+			break;
>>+		rte_smp_rmb();
>>+
>>+		pkts[i] = rte_pktmbuf_alloc(mbuf_pool);
>>+		if (unlikely(pkts[i] == NULL)) {
>>+			RTE_LOG(ERR, VHOST_DATA,
>>+				"Failed to allocate memory for mbuf.\n");
>>+			break;
>>+		}
>>+
>>+		err = dequeue_desc(dev, vq, mbuf_pool, pkts[i], desc);
>>+		if (unlikely(err)) {
>>+			rte_pktmbuf_free(pkts[i]);
>>+			break;
>>+		}
>>+	}
>>+
>>+	rte_spinlock_unlock(&vq->access_lock);
>
>Where is it locked? It looks unbalanced.

It is locked in the caller rte_vhost_dequeue_burst() and we return
immediately. But I could change it to lock and unlock right here.

Thanks for the review!

regards,
Jens 

^ permalink raw reply	[flat|nested] 27+ messages in thread
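
If the locking is moved into the packed-ring path as discussed, the burst
function would take and release vq->access_lock itself, roughly as below
(a sketch assuming the caller then dispatches to it before taking the lock
on its own; vhost.h/virtio-1.1.h context from this series is assumed):

static inline uint16_t
vhost_dequeue_burst_packed(struct virtio_net *dev, struct vhost_virtqueue *vq,
                           struct rte_mempool *mbuf_pool,
                           struct rte_mbuf **pkts, uint16_t count)
{
        uint16_t i = 0;

        rte_spinlock_lock(&vq->access_lock);

        if (likely(vq->enabled)) {
                /* ...per-descriptor processing loop as in the patch... */
        }

        rte_spinlock_unlock(&vq->access_lock);

        return i;
}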

* Re: [dpdk-dev] [PATCH 00/14] implement packed virtqueues
  2018-01-29 14:11 [dpdk-dev] [PATCH 00/14] implement packed virtqueues Jens Freimann
                   ` (13 preceding siblings ...)
  2018-01-29 14:11 ` [dpdk-dev] [PATCH 14/14] vhost: enable packed virtqueues Jens Freimann
@ 2018-02-06  6:20 ` Jason Wang
  2018-02-06 13:58   ` Jens Freimann
  14 siblings, 1 reply; 27+ messages in thread
From: Jason Wang @ 2018-02-06  6:20 UTC (permalink / raw)
  To: Jens Freimann, dev; +Cc: tiwei.bie, yliu, maxime.coquelin, mst, Wei Xu



On 2018年01月29日 22:11, Jens Freimann wrote:
> This is a basic implementation of packed virtqueues as specified in the
> Virtio 1.1 draft. A compiled version of the current draft is available
> at https://github.com/oasis-tcs/virtio-docs.git (or as .pdf at
> https://github.com/oasis-tcs/virtio-docs/blob/master/virtio-v1.1-packed-wd07.pdf
>
> It does not implement yet indirect descriptors and checksum offloading.
> VIRTIO_F_IN_ORER is not implemented, as well as support for mergeable buffers
> with packed queues. Patches for this will follow soon.
>
> A packed virtqueue is different from a split virtqueue in that it
> consists of only a single descriptor ring that replaces available and
> used ring, index and descriptor buffer.
>
> Each descriptor is readable and writable and has a flags field. These flags
> will mark if a descriptor is available or used.  To detect new available descriptors
> even after the ring has wrapped, device and driver each have a
> single-bit wrap counter that is flipped from 0 to 1 and vice versa every time
> the last descriptor in the ring is used/made available.
>
> The idea behind this is to 1. improve performance by avoiding cache misses
> and 2. be easier for devices to implement.
>
> Regarding performance: with these patches I get 21.13 Mpps on my system
> as compared to 18.8 Mpps with the virtio 1.0 code. Packet size was 64
> bytes, 0.05% acceptable loss.  Test setup is described as in
> http://dpdk.org/doc/guides/howto/pvp_reference_benchmark.html
>
> Packet generator:
> MoonGen
> Intel(R) Xeon(R) CPU E5-2665 0 @ 2.40GHz
> Intel X710 NIC
> RHEL 7.4
>
> Device under test:
> Intel(R) Xeon(R) CPU E5-2667 v4 @ 3.20GHz
> Intel X710 NIC
> RHEL 7.4
>
> VM on DuT: RHEL7.4
>
> I plan to do more performance test with bigger frame sizes.
>
> This patch series is based on a prototype implemented by Yuanhan Liu and
> Tiwei Bie.
>
>

Hi Jens:

May I ask how do you test the patch? I believe you need some basic 
packed ring support in both qemu and vhost-user protocol.

Thanks

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [dpdk-dev] [PATCH 00/14] implement packed virtqueues
  2018-02-06  6:20 ` [dpdk-dev] [PATCH 00/14] implement " Jason Wang
@ 2018-02-06 13:58   ` Jens Freimann
  2018-02-07  8:37     ` Jason Wang
  0 siblings, 1 reply; 27+ messages in thread
From: Jens Freimann @ 2018-02-06 13:58 UTC (permalink / raw)
  To: Jason Wang; +Cc: dev, tiwei.bie, yliu, maxime.coquelin, mst, Wei Xu

On Tue, Feb 06, 2018 at 02:20:02PM +0800, Jason Wang wrote:
>
>
>On 2018年01月29日 22:11, Jens Freimann wrote:
>May I ask how do you test the patch? I believe you need some basic 
>packed ring support in both qemu and vhost-user protocol.

Yes, I have a small patch for qemu here:
 https://github.com/jensfr/qemu/commit/71b4013e3a183b1d5ab6d8ad2bb4829951425d33

Also you need another patch for dpdk to disable ctrl virtqueue
(a workaround) which is on my dpdk branch:

 https://github.com/jensfr/dpdk/commit/305f4b2453448402c1994caef8d8e01923660ff3

I will add them to v2 when I send it out.

regards,
 Jens 

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [dpdk-dev] [PATCH 00/14] implement packed virtqueues
  2018-02-06 13:58   ` Jens Freimann
@ 2018-02-07  8:37     ` Jason Wang
  0 siblings, 0 replies; 27+ messages in thread
From: Jason Wang @ 2018-02-07  8:37 UTC (permalink / raw)
  To: Jens Freimann; +Cc: dev, tiwei.bie, yliu, maxime.coquelin, mst, Wei Xu



On 2018年02月06日 21:58, Jens Freimann wrote:
> On Tue, Feb 06, 2018 at 02:20:02PM +0800, Jason Wang wrote:
>>
>>
>> On 2018年01月29日 22:11, Jens Freimann wrote:
>> May I ask how do you test the patch? I believe you need some basic 
>> packed ring support in both qemu and vhost-user protocol.
>
> Yes, I have a small patch for qemu here:
> https://github.com/jensfr/qemu/commit/71b4013e3a183b1d5ab6d8ad2bb4829951425d33 
>
>
> Also you need another patch for dpdk to disable ctrl virtqueue
> (a workaround) which is on my dpdk branch:
>
> https://github.com/jensfr/dpdk/commit/305f4b2453448402c1994caef8d8e01923660ff3 
>
>
> I will add them to v2 when I send it out.
>
> regards,
> Jens 

I see.

Thanks

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [dpdk-dev] [PATCH 07/14] net/virtio: implement transmit path for packed queues
  2018-01-29 14:11 ` [dpdk-dev] [PATCH 07/14] net/virtio: implement transmit path for " Jens Freimann
@ 2018-02-12 13:16   ` Jason Wang
  2018-02-13  9:10     ` Jens Freimann
  0 siblings, 1 reply; 27+ messages in thread
From: Jason Wang @ 2018-02-12 13:16 UTC (permalink / raw)
  To: Jens Freimann, dev; +Cc: tiwei.bie, yliu, maxime.coquelin, mst



On 2018年01月29日 22:11, Jens Freimann wrote:
> +/* Cleanup from completed transmits. */
> +static void
> +virtio_xmit_cleanup(struct virtqueue *vq)
> +{
> +	uint16_t idx;
> +	uint16_t size = vq->vq_nentries;
> +	struct vring_desc_1_1 *desc = vq->vq_ring.desc_1_1;
> +
> +	idx = vq->vq_used_cons_idx & (size - 1);
> +	while (desc_is_used(&desc[idx]) &&
> +	       vq->vq_free_cnt < size) {
> +		while (desc[idx].flags & VRING_DESC_F_NEXT) {
> +			vq->vq_free_cnt++;
> +			idx = ++vq->vq_used_cons_idx & (size - 1);
> +		}
> +		vq->vq_free_cnt++;
> +		idx = ++vq->vq_used_cons_idx & (size - 1);
> +	}
> +}

This looks like a violation of the spec. In 2.6.6 Next Flag: Descriptor
Chaining, it says:

"VIRTQ_DESC_F_NEXT is reserved in used descriptors, and should be 
ignored by drivers."

(Looking at the device implementation, it is in fact an in-order device,
which the cover letter says is not implemented.)

Thanks

^ permalink raw reply	[flat|nested] 27+ messages in thread
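
The spec point above suggests the cleanup should not walk VIRTQ_DESC_F_NEXT
in used descriptors at all. One alternative is to record, at transmit time,
how many descriptors each packet consumed and to free that many once the
head slot is marked used. A sketch along those lines (the ndescs bookkeeping
in vq_descx[] is an assumption for illustration, not something shown in this
series):

/* Sketch of a cleanup loop keyed on per-packet descriptor counts. */
static void
sketch_xmit_cleanup_packed(struct virtqueue *vq)
{
        uint16_t size = vq->vq_nentries;
        struct vring_desc_1_1 *desc = vq->vq_ring.desc_1_1;
        uint16_t idx = vq->vq_used_cons_idx & (size - 1);

        while (vq->vq_free_cnt < size && desc_is_used(&desc[idx])) {
                /* ndescs recorded by the transmit path for this head slot. */
                uint16_t ndescs = vq->vq_descx[idx].ndescs;

                vq->vq_free_cnt += ndescs;
                vq->vq_used_cons_idx += ndescs;
                idx = vq->vq_used_cons_idx & (size - 1);
        }
}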

* Re: [dpdk-dev] [PATCH 07/14] net/virtio: implement transmit path for packed queues
  2018-02-12 13:16   ` Jason Wang
@ 2018-02-13  9:10     ` Jens Freimann
  0 siblings, 0 replies; 27+ messages in thread
From: Jens Freimann @ 2018-02-13  9:10 UTC (permalink / raw)
  To: Jason Wang; +Cc: dev, tiwei.bie, yliu, maxime.coquelin, mst

On Mon, Feb 12, 2018 at 09:16:27PM +0800, Jason Wang wrote:
>
>
>On 2018年01月29日 22:11, Jens Freimann wrote:
>>+/* Cleanup from completed transmits. */
>>+static void
>>+virtio_xmit_cleanup(struct virtqueue *vq)
>>+{
>>+	uint16_t idx;
>>+	uint16_t size = vq->vq_nentries;
>>+	struct vring_desc_1_1 *desc = vq->vq_ring.desc_1_1;
>>+
>>+	idx = vq->vq_used_cons_idx & (size - 1);
>>+	while (desc_is_used(&desc[idx]) &&
>>+	       vq->vq_free_cnt < size) {
>>+		while (desc[idx].flags & VRING_DESC_F_NEXT) {
>>+			vq->vq_free_cnt++;
>>+			idx = ++vq->vq_used_cons_idx & (size - 1);
>>+		}
>>+		vq->vq_free_cnt++;
>>+		idx = ++vq->vq_used_cons_idx & (size - 1);
>>+	}
>>+}
>
>This looks like a violation of the spec. In 2.6.6 Next Flag:
>Descriptor Chaining, it says:
>
>"VIRTQ_DESC_F_NEXT is reserved in used descriptors, and should be 
>ignored by drivers."

I think you are right. I will rework this bit.
>
>(Looking at the device implementation, it is in fact an in-order
>device, which the cover letter says is not implemented.)

Looking into this as well.

Thanks for the review!

regards,
Jens 

^ permalink raw reply	[flat|nested] 27+ messages in thread

end of thread, other threads:[~2018-02-13  9:10 UTC | newest]

Thread overview: 27+ messages
2018-01-29 14:11 [dpdk-dev] [PATCH 00/14] implement packed virtqueues Jens Freimann
2018-01-29 14:11 ` [dpdk-dev] [PATCH 01/14] net/virtio: vring init for packed queues Jens Freimann
2018-01-29 14:11 ` [dpdk-dev] [PATCH 02/14] net/virtio: don't call virtio_disable_intr() " Jens Freimann
2018-01-29 14:11 ` [dpdk-dev] [PATCH 03/14] net/virtio: add virtio 1.1 defines Jens Freimann
2018-01-29 14:11 ` [dpdk-dev] [PATCH 04/14] net/virtio: add packed virtqueue helpers Jens Freimann
2018-01-29 14:11 ` [dpdk-dev] [PATCH 05/14] net/virtio: don't dump split virtqueue data Jens Freimann
2018-01-29 14:11 ` [dpdk-dev] [PATCH 06/14] net/virtio-user: add option to use packed queues Jens Freimann
2018-01-29 14:11 ` [dpdk-dev] [PATCH 07/14] net/virtio: implement transmit path for " Jens Freimann
2018-02-12 13:16   ` Jason Wang
2018-02-13  9:10     ` Jens Freimann
2018-01-29 14:11 ` [dpdk-dev] [PATCH 08/14] net/virtio: implement receive " Jens Freimann
2018-01-29 14:11 ` [dpdk-dev] [PATCH 09/14] vhost: add virtio 1.1 defines Jens Freimann
2018-01-29 14:24   ` Yuanhan Liu
2018-01-29 14:28     ` Jens Freimann
2018-01-29 14:11 ` [dpdk-dev] [PATCH 10/14] vhost: vring address setup for packed queues Jens Freimann
2018-02-01  9:16   ` Maxime Coquelin
2018-01-29 14:11 ` [dpdk-dev] [PATCH 11/14] vhost: add helpers for packed virtqueues Jens Freimann
2018-02-01  9:20   ` Maxime Coquelin
2018-01-29 14:11 ` [dpdk-dev] [PATCH 12/14] vhost: dequeue for packed queues Jens Freimann
2018-02-01  9:35   ` Maxime Coquelin
2018-02-01 10:23     ` Jens Freimann
2018-01-29 14:11 ` [dpdk-dev] [PATCH 13/14] vhost: packed queue enqueue path Jens Freimann
2018-01-29 14:11 ` [dpdk-dev] [PATCH 14/14] vhost: enable packed virtqueues Jens Freimann
2018-01-30 10:45   ` Jens Freimann
2018-02-06  6:20 ` [dpdk-dev] [PATCH 00/14] implement " Jason Wang
2018-02-06 13:58   ` Jens Freimann
2018-02-07  8:37     ` Jason Wang
