* [dpdk-dev] [PATCH v13 00/10] implement packed virtqueues
@ 2018-12-14 15:59 Jens Freimann
2018-12-14 15:59 ` [dpdk-dev] [PATCH v13 01/10] net/virtio: add packed virtqueue defines Jens Freimann
` (9 more replies)
0 siblings, 10 replies; 22+ messages in thread
From: Jens Freimann @ 2018-12-14 15:59 UTC (permalink / raw)
To: dev; +Cc: tiwei.bie, maxime.coquelin, Gavin.Hu
This is a basic implementation of packed virtqueues as specified in the
Virtio 1.1 draft. A compiled version of the current draft is available
at https://github.com/oasis-tcs/virtio-docs.git (or as .pdf at
https://github.com/oasis-tcs/virtio-docs/blob/master/virtio-v1.1-packed-wd10.pdf).
A packed virtqueue differs from a split virtqueue in that it consists of
a single descriptor ring that replaces the available and used rings,
their indexes and the descriptor pointers.
Each descriptor is readable and writable and has a flags field. These
flags indicate whether a descriptor is available or used. To detect new
available descriptors even after the ring has wrapped, device and driver
each maintain a single-bit wrap counter that is flipped from 0 to 1 and
vice versa every time the last descriptor in the ring is used/made
available.
With this patch set I see performance equal to or slightly better
(+2-3%) than split virtqueues in a PVP scenario (v18.11 in the host).
regards,
Jens
v12->v13:
* re-order patches 1-3 to address Maxime's comments. Move all defines and
data structures to patch 1, all helpers to patch 2.
* build-tested and ran checkpatch on all patches again
* add split/packed versions for virtio_enable_intr()
* remove redundant changes from split vq code
* just return -1 when cq is enabled for packed vqs in virtio-user
v11->v12:
* add a patch to disable control vq when packed vq is enabled.
I have a patch to support this but it needs a bit more work
and I think it shouldn't stop this series from being applied
* rework mergeable receive buffer code to be more efficient, by
batching descriptor refill, similar to what Maxime proposed for
split virtqueues
* removed unnecessary checks in virtio_recv_mergeable_pkts_packed
(Maxime)
* Did not merge receive functions as Maxime suggested because it seemed
to cause a small performance regression
* Move event_flags_shadow from patch 3 to 1 (Maxime)
* Did not merge xmit functions and call _split/_packed functions from
there because it seemed to cause small performance drop (-0.5%)
(Maxime)
v10->v11:
* this version includes some fixes from Tiwei, so I added his
Signed-off-by to some of the patches
* fix hang with mergeable rx buffers (Tiwei)
* clean-up code and simplify buffer handling (Tiwei)
* rebase to current virtio-next master branch
v9->v10:
* don't mix index into buffer list and descriptors
* whitespace and formatting issues
* remove "VQ:" in dump virtqueue patch
* add extra packed vring struct to virtqueue and change function
prototypes and code accordingly
* move wrap_counters to virtqueue
* make if-conditions for packed and !packed more clear in
set_rxtx_funcs()
* initialize wrap counters in first patch, instead of rx and tx
implementation patch
* make packed virtqueues unsupported with virtio-user, to
be fixed in another patch set
v8->v9:
* fix virtio_ring_free_chain_packed() to handle descriptors
correctly in case of out-of-order
* fix check in virtqueue_xmit_cleanup_packed() to improve performance
v7->v8:
* move desc_is_used change to correct patch
* remove trailing newline
* correct xmit code, flags update and memory barrier
* move packed desc init to dedicated function, split
and packed variant
Jens Freimann (9):
net/virtio: add packed virtqueue defines
net/virtio: add packed virtqueue helpers
net/virtio: vring init for packed queues
net/virtio: dump packed virtqueue data
net/virtio: implement transmit path for packed queues
net/virtio: implement receive path for packed queues
net/virtio: add virtio send command packed queue support
net/virtio-user: fail if q used with packed vq
net/virtio: enable packed virtqueues by default
Yuanhan Liu (1):
net/virtio-user: add option to use packed queues
drivers/net/virtio/virtio_ethdev.c | 221 +++++--
drivers/net/virtio/virtio_ethdev.h | 8 +
drivers/net/virtio/virtio_pci.h | 7 +
drivers/net/virtio/virtio_ring.h | 64 +-
drivers/net/virtio/virtio_rxtx.c | 611 +++++++++++++++++-
.../net/virtio/virtio_user/virtio_user_dev.c | 27 +-
.../net/virtio/virtio_user/virtio_user_dev.h | 2 +-
drivers/net/virtio/virtio_user_ethdev.c | 14 +-
drivers/net/virtio/virtqueue.c | 43 +-
drivers/net/virtio/virtqueue.h | 128 +++-
10 files changed, 1059 insertions(+), 66 deletions(-)
--
2.17.2
^ permalink raw reply [flat|nested] 22+ messages in thread
* [dpdk-dev] [PATCH v13 01/10] net/virtio: add packed virtqueue defines
2018-12-14 15:59 [dpdk-dev] [PATCH v13 00/10] implement packed virtqueues Jens Freimann
@ 2018-12-14 15:59 ` Jens Freimann
2018-12-17 15:45 ` Maxime Coquelin
2018-12-14 15:59 ` [dpdk-dev] [PATCH v13 02/10] net/virtio: add packed virtqueue helpers Jens Freimann
` (8 subsequent siblings)
9 siblings, 1 reply; 22+ messages in thread
From: Jens Freimann @ 2018-12-14 15:59 UTC (permalink / raw)
To: dev; +Cc: tiwei.bie, maxime.coquelin, Gavin.Hu
Signed-off-by: Jens Freimann <jfreimann@redhat.com>
---
drivers/net/virtio/virtio_pci.h | 1 +
drivers/net/virtio/virtio_ring.h | 30 ++++++++++++++++++++++++++++++
drivers/net/virtio/virtqueue.h | 6 ++++++
3 files changed, 37 insertions(+)
diff --git a/drivers/net/virtio/virtio_pci.h b/drivers/net/virtio/virtio_pci.h
index e961a58ca..4c975a531 100644
--- a/drivers/net/virtio/virtio_pci.h
+++ b/drivers/net/virtio/virtio_pci.h
@@ -113,6 +113,7 @@ struct virtnet_ctl;
#define VIRTIO_F_VERSION_1 32
#define VIRTIO_F_IOMMU_PLATFORM 33
+#define VIRTIO_F_RING_PACKED 34
/*
* Some VirtIO feature bits (currently bits 28 through 31) are
diff --git a/drivers/net/virtio/virtio_ring.h b/drivers/net/virtio/virtio_ring.h
index 9e3c2a015..464449074 100644
--- a/drivers/net/virtio/virtio_ring.h
+++ b/drivers/net/virtio/virtio_ring.h
@@ -15,6 +15,10 @@
#define VRING_DESC_F_WRITE 2
/* This means the buffer contains a list of buffer descriptors. */
#define VRING_DESC_F_INDIRECT 4
+/* This flag means the descriptor was made available by the driver */
+#define VRING_DESC_F_AVAIL(b) ((uint16_t)(b) << 7)
+/* This flag means the descriptor was used by the device */
+#define VRING_DESC_F_USED(b) ((uint16_t)(b) << 15)
/* The Host uses this in used->flags to advise the Guest: don't kick me
* when you add a buffer. It's unreliable, so it's simply an
@@ -54,6 +58,32 @@ struct vring_used {
struct vring_used_elem ring[0];
};
+/* For support of packed virtqueues in Virtio 1.1 the format of descriptors
+ * looks like this.
+ */
+struct vring_packed_desc {
+ uint64_t addr;
+ uint32_t len;
+ uint16_t id;
+ uint16_t flags;
+};
+
+#define RING_EVENT_FLAGS_ENABLE 0x0
+#define RING_EVENT_FLAGS_DISABLE 0x1
+#define RING_EVENT_FLAGS_DESC 0x2
+struct vring_packed_desc_event {
+ uint16_t desc_event_off_wrap;
+ uint16_t desc_event_flags;
+};
+
+struct vring_packed {
+ unsigned int num;
+ struct vring_packed_desc *desc_packed;
+ struct vring_packed_desc_event *driver_event;
+ struct vring_packed_desc_event *device_event;
+
+};
+
struct vring {
unsigned int num;
struct vring_desc *desc;
diff --git a/drivers/net/virtio/virtqueue.h b/drivers/net/virtio/virtqueue.h
index 26518ed98..d4e0858e4 100644
--- a/drivers/net/virtio/virtqueue.h
+++ b/drivers/net/virtio/virtqueue.h
@@ -161,11 +161,17 @@ struct virtio_pmd_ctrl {
struct vq_desc_extra {
void *cookie;
uint16_t ndescs;
+ uint16_t next;
};
struct virtqueue {
struct virtio_hw *hw; /**< virtio_hw structure pointer. */
struct vring vq_ring; /**< vring keeping desc, used and avail */
+ struct vring_packed ring_packed; /**< vring keeping descs */
+ bool avail_wrap_counter;
+ bool used_wrap_counter;
+ uint16_t event_flags_shadow;
+ uint16_t avail_used_flags;
/**
* Last consumed descriptor in the used table,
* trails vq_ring.used->idx.
--
2.17.2
* [dpdk-dev] [PATCH v13 02/10] net/virtio: add packed virtqueue helpers
2018-12-14 15:59 [dpdk-dev] [PATCH v13 00/10] implement packed virtqueues Jens Freimann
2018-12-14 15:59 ` [dpdk-dev] [PATCH v13 01/10] net/virtio: add packed virtqueue defines Jens Freimann
@ 2018-12-14 15:59 ` Jens Freimann
2018-12-17 16:09 ` Maxime Coquelin
2018-12-17 16:30 ` Maxime Coquelin
2018-12-14 15:59 ` [dpdk-dev] [PATCH v13 03/10] net/virtio: vring init for packed queues Jens Freimann
` (7 subsequent siblings)
9 siblings, 2 replies; 22+ messages in thread
From: Jens Freimann @ 2018-12-14 15:59 UTC (permalink / raw)
To: dev; +Cc: tiwei.bie, maxime.coquelin, Gavin.Hu
Add helper functions to set/clear and check descriptor flags.
Signed-off-by: Jens Freimann <jfreimann@redhat.com>
---
drivers/net/virtio/virtio_pci.h | 6 +++
drivers/net/virtio/virtqueue.h | 91 ++++++++++++++++++++++++++++++++-
2 files changed, 95 insertions(+), 2 deletions(-)
diff --git a/drivers/net/virtio/virtio_pci.h b/drivers/net/virtio/virtio_pci.h
index 4c975a531..b22b62dad 100644
--- a/drivers/net/virtio/virtio_pci.h
+++ b/drivers/net/virtio/virtio_pci.h
@@ -315,6 +315,12 @@ vtpci_with_feature(struct virtio_hw *hw, uint64_t bit)
return (hw->guest_features & (1ULL << bit)) != 0;
}
+static inline int
+vtpci_packed_queue(struct virtio_hw *hw)
+{
+ return vtpci_with_feature(hw, VIRTIO_F_RING_PACKED);
+}
+
/*
* Function declaration from virtio_pci.c
*/
diff --git a/drivers/net/virtio/virtqueue.h b/drivers/net/virtio/virtqueue.h
index d4e0858e4..19fabae0b 100644
--- a/drivers/net/virtio/virtqueue.h
+++ b/drivers/net/virtio/virtqueue.h
@@ -251,6 +251,44 @@ struct virtio_tx_region {
__attribute__((__aligned__(16)));
};
+static inline void
+_set_desc_avail(struct vring_packed_desc *desc, int wrap_counter)
+{
+ desc->flags |= VRING_DESC_F_AVAIL(wrap_counter) |
+ VRING_DESC_F_USED(!wrap_counter);
+}
+
+static inline void
+set_desc_avail(struct virtqueue *vq, struct vring_packed_desc *desc)
+{
+ _set_desc_avail(desc, vq->avail_wrap_counter);
+}
+
+static inline int
+desc_is_used(struct vring_packed_desc *desc, struct virtqueue *vq)
+{
+ uint16_t used, avail, flags;
+
+ flags = desc->flags;
+ used = !!(flags & VRING_DESC_F_USED(1));
+ avail = !!(flags & VRING_DESC_F_AVAIL(1));
+
+ return avail == used && used == vq->used_wrap_counter;
+}
+
+
+static inline void
+vring_desc_init_packed(struct virtqueue *vq, int n)
+{
+ int i;
+ for (i = 0; i < n - 1; i++) {
+ vq->ring_packed.desc_packed[i].id = i;
+ vq->vq_descx[i].next = i + 1;
+ }
+ vq->ring_packed.desc_packed[i].id = i;
+ vq->vq_descx[i].next = VQ_RING_DESC_CHAIN_END;
+}
+
/* Chain all the descriptors in the ring with an END */
static inline void
vring_desc_init(struct vring_desc *dp, uint16_t n)
@@ -262,13 +300,59 @@ vring_desc_init(struct vring_desc *dp, uint16_t n)
dp[i].next = VQ_RING_DESC_CHAIN_END;
}
+/**
+ * Tell the backend not to interrupt us.
+ */
+static inline void
+virtqueue_disable_intr_packed(struct virtqueue *vq)
+{
+ uint16_t *event_flags = &vq->ring_packed.driver_event->desc_event_flags;
+
+ if (*event_flags != RING_EVENT_FLAGS_DISABLE)
+ *event_flags = RING_EVENT_FLAGS_DISABLE;
+}
+
+
/**
* Tell the backend not to interrupt us.
*/
static inline void
virtqueue_disable_intr(struct virtqueue *vq)
{
- vq->vq_ring.avail->flags |= VRING_AVAIL_F_NO_INTERRUPT;
+ if (vtpci_packed_queue(vq->hw))
+ virtqueue_disable_intr_packed(vq);
+ else
+ vq->vq_ring.avail->flags |= VRING_AVAIL_F_NO_INTERRUPT;
+}
+
+/**
+ * Tell the backend to interrupt. Implementation for packed virtqueues.
+ */
+static inline void
+virtqueue_enable_intr_packed(struct virtqueue *vq)
+{
+ uint16_t *off_wrap = &vq->ring_packed.driver_event->desc_event_off_wrap;
+ uint16_t *event_flags = &vq->ring_packed.driver_event->desc_event_flags;
+
+ *off_wrap = vq->vq_used_cons_idx |
+ ((uint16_t)(vq->used_wrap_counter << 15));
+
+ if (vq->event_flags_shadow == RING_EVENT_FLAGS_DISABLE) {
+ virtio_wmb();
+ vq->event_flags_shadow =
+ vtpci_with_feature(vq->hw, VIRTIO_RING_F_EVENT_IDX) ?
+ RING_EVENT_FLAGS_DESC : RING_EVENT_FLAGS_ENABLE;
+ *event_flags = vq->event_flags_shadow;
+ }
+}
+
+/**
+ * Tell the backend to interrupt. Implementation for split virtqueues.
+ */
+static inline void
+virtqueue_enable_intr_split(struct virtqueue *vq)
+{
+ vq->vq_ring.avail->flags &= (~VRING_AVAIL_F_NO_INTERRUPT);
}
/**
@@ -277,7 +361,10 @@ virtqueue_disable_intr(struct virtqueue *vq)
static inline void
virtqueue_enable_intr(struct virtqueue *vq)
{
- vq->vq_ring.avail->flags &= (~VRING_AVAIL_F_NO_INTERRUPT);
+ if (vtpci_packed_queue(vq->hw))
+ virtqueue_enable_intr_packed(vq);
+ else
+ virtqueue_enable_intr_split(vq);
}
/**
--
2.17.2
* [dpdk-dev] [PATCH v13 03/10] net/virtio: vring init for packed queues
2018-12-14 15:59 [dpdk-dev] [PATCH v13 00/10] implement packed virtqueues Jens Freimann
2018-12-14 15:59 ` [dpdk-dev] [PATCH v13 01/10] net/virtio: add packed virtqueue defines Jens Freimann
2018-12-14 15:59 ` [dpdk-dev] [PATCH v13 02/10] net/virtio: add packed virtqueue helpers Jens Freimann
@ 2018-12-14 15:59 ` Jens Freimann
2018-12-17 16:15 ` Maxime Coquelin
2018-12-14 15:59 ` [dpdk-dev] [PATCH v13 04/10] net/virtio: dump packed virtqueue data Jens Freimann
` (6 subsequent siblings)
9 siblings, 1 reply; 22+ messages in thread
From: Jens Freimann @ 2018-12-14 15:59 UTC (permalink / raw)
To: dev; +Cc: tiwei.bie, maxime.coquelin, Gavin.Hu
Add and initialize descriptor data structures.
Signed-off-by: Jens Freimann <jfreimann@redhat.com>
Signed-off-by: Tiwei Bie <tiwei.bie@intel.com>
---
drivers/net/virtio/virtio_ethdev.c | 32 ++++++++++++++++++----------
drivers/net/virtio/virtio_ring.h | 34 ++++++++++++++++++++++++------
drivers/net/virtio/virtqueue.h | 2 +-
3 files changed, 49 insertions(+), 19 deletions(-)
diff --git a/drivers/net/virtio/virtio_ethdev.c b/drivers/net/virtio/virtio_ethdev.c
index cb2b2e0bf..e6ba1282b 100644
--- a/drivers/net/virtio/virtio_ethdev.c
+++ b/drivers/net/virtio/virtio_ethdev.c
@@ -299,20 +299,22 @@ virtio_init_vring(struct virtqueue *vq)
PMD_INIT_FUNC_TRACE();
- /*
- * Reinitialise since virtio port might have been stopped and restarted
- */
memset(ring_mem, 0, vq->vq_ring_size);
- vring_init(vr, size, ring_mem, VIRTIO_PCI_VRING_ALIGN);
+
vq->vq_used_cons_idx = 0;
vq->vq_desc_head_idx = 0;
vq->vq_avail_idx = 0;
vq->vq_desc_tail_idx = (uint16_t)(vq->vq_nentries - 1);
vq->vq_free_cnt = vq->vq_nentries;
memset(vq->vq_descx, 0, sizeof(struct vq_desc_extra) * vq->vq_nentries);
-
- vring_desc_init(vr->desc, size);
-
+ if (vtpci_packed_queue(vq->hw)) {
+ vring_init_packed(&vq->ring_packed, ring_mem,
+ VIRTIO_PCI_VRING_ALIGN, size);
+ vring_desc_init_packed(vq, size);
+ } else {
+ vring_init_split(vr, ring_mem, VIRTIO_PCI_VRING_ALIGN, size);
+ vring_desc_init_split(vr->desc, size);
+ }
/*
* Disable device(host) interrupting guest
*/
@@ -384,11 +386,16 @@ virtio_init_queue(struct rte_eth_dev *dev, uint16_t vtpci_queue_idx)
vq->hw = hw;
vq->vq_queue_index = vtpci_queue_idx;
vq->vq_nentries = vq_size;
+ vq->event_flags_shadow = 0;
+ if (vtpci_packed_queue(hw)) {
+ vq->avail_wrap_counter = 1;
+ vq->used_wrap_counter = 1;
+ }
/*
* Reserve a memzone for vring elements
*/
- size = vring_size(vq_size, VIRTIO_PCI_VRING_ALIGN);
+ size = vring_size(hw, vq_size, VIRTIO_PCI_VRING_ALIGN);
vq->vq_ring_size = RTE_ALIGN_CEIL(size, VIRTIO_PCI_VRING_ALIGN);
PMD_INIT_LOG(DEBUG, "vring_size: %d, rounded_vring_size: %d",
size, vq->vq_ring_size);
@@ -491,7 +498,8 @@ virtio_init_queue(struct rte_eth_dev *dev, uint16_t vtpci_queue_idx)
for (i = 0; i < vq_size; i++) {
struct vring_desc *start_dp = txr[i].tx_indir;
- vring_desc_init(start_dp, RTE_DIM(txr[i].tx_indir));
+ vring_desc_init_split(start_dp,
+ RTE_DIM(txr[i].tx_indir));
/* first indirect descriptor is always the tx header */
start_dp->addr = txvq->virtio_net_hdr_mem
@@ -1488,7 +1496,8 @@ virtio_init_device(struct rte_eth_dev *eth_dev, uint64_t req_features)
/* Setting up rx_header size for the device */
if (vtpci_with_feature(hw, VIRTIO_NET_F_MRG_RXBUF) ||
- vtpci_with_feature(hw, VIRTIO_F_VERSION_1))
+ vtpci_with_feature(hw, VIRTIO_F_VERSION_1) ||
+ vtpci_with_feature(hw, VIRTIO_F_RING_PACKED))
hw->vtnet_hdr_size = sizeof(struct virtio_net_hdr_mrg_rxbuf);
else
hw->vtnet_hdr_size = sizeof(struct virtio_net_hdr);
@@ -1908,7 +1917,8 @@ virtio_dev_configure(struct rte_eth_dev *dev)
if (vtpci_with_feature(hw, VIRTIO_F_IN_ORDER)) {
hw->use_inorder_tx = 1;
- if (vtpci_with_feature(hw, VIRTIO_NET_F_MRG_RXBUF)) {
+ if (vtpci_with_feature(hw, VIRTIO_NET_F_MRG_RXBUF) &&
+ !vtpci_packed_queue(hw)) {
hw->use_inorder_rx = 1;
hw->use_simple_rx = 0;
} else {
diff --git a/drivers/net/virtio/virtio_ring.h b/drivers/net/virtio/virtio_ring.h
index 464449074..9ab007c11 100644
--- a/drivers/net/virtio/virtio_ring.h
+++ b/drivers/net/virtio/virtio_ring.h
@@ -86,7 +86,7 @@ struct vring_packed {
struct vring {
unsigned int num;
- struct vring_desc *desc;
+ struct vring_desc *desc;
struct vring_avail *avail;
struct vring_used *used;
};
@@ -125,10 +125,18 @@ struct vring {
#define vring_avail_event(vr) (*(uint16_t *)&(vr)->used->ring[(vr)->num])
static inline size_t
-vring_size(unsigned int num, unsigned long align)
+vring_size(struct virtio_hw *hw, unsigned int num, unsigned long align)
{
size_t size;
+ if (vtpci_packed_queue(hw)) {
+ size = num * sizeof(struct vring_packed_desc);
+ size += sizeof(struct vring_packed_desc_event);
+ size = RTE_ALIGN_CEIL(size, align);
+ size += sizeof(struct vring_packed_desc_event);
+ return size;
+ }
+
size = num * sizeof(struct vring_desc);
size += sizeof(struct vring_avail) + (num * sizeof(uint16_t));
size = RTE_ALIGN_CEIL(size, align);
@@ -136,17 +144,29 @@ vring_size(unsigned int num, unsigned long align)
(num * sizeof(struct vring_used_elem));
return size;
}
-
static inline void
-vring_init(struct vring *vr, unsigned int num, uint8_t *p,
- unsigned long align)
+vring_init_split(struct vring *vr, uint8_t *p, unsigned long align,
+ unsigned int num)
{
vr->num = num;
vr->desc = (struct vring_desc *) p;
vr->avail = (struct vring_avail *) (p +
- num * sizeof(struct vring_desc));
+ vr->num * sizeof(struct vring_desc));
vr->used = (void *)
- RTE_ALIGN_CEIL((uintptr_t)(&vr->avail->ring[num]), align);
+ RTE_ALIGN_CEIL((uintptr_t)(&vr->avail->ring[vr->num]), align);
+}
+
+static inline void
+vring_init_packed(struct vring_packed *vr, uint8_t *p, unsigned long align,
+ unsigned int num)
+{
+ vr->num = num;
+ vr->desc_packed = (struct vring_packed_desc *)p;
+ vr->driver_event = (struct vring_packed_desc_event *)(p +
+ vr->num * sizeof(struct vring_packed_desc));
+ vr->device_event = (struct vring_packed_desc_event *)
+ RTE_ALIGN_CEIL((uintptr_t)(vr->driver_event +
+ sizeof(struct vring_packed_desc_event)), align);
}
/*
diff --git a/drivers/net/virtio/virtqueue.h b/drivers/net/virtio/virtqueue.h
index 19fabae0b..809d15879 100644
--- a/drivers/net/virtio/virtqueue.h
+++ b/drivers/net/virtio/virtqueue.h
@@ -291,7 +291,7 @@ vring_desc_init_packed(struct virtqueue *vq, int n)
/* Chain all the descriptors in the ring with an END */
static inline void
-vring_desc_init(struct vring_desc *dp, uint16_t n)
+vring_desc_init_split(struct vring_desc *dp, uint16_t n)
{
uint16_t i;
--
2.17.2
* [dpdk-dev] [PATCH v13 04/10] net/virtio: dump packed virtqueue data
2018-12-14 15:59 [dpdk-dev] [PATCH v13 00/10] implement packed virtqueues Jens Freimann
` (2 preceding siblings ...)
2018-12-14 15:59 ` [dpdk-dev] [PATCH v13 03/10] net/virtio: vring init for packed queues Jens Freimann
@ 2018-12-14 15:59 ` Jens Freimann
2018-12-14 15:59 ` [dpdk-dev] [PATCH v13 05/10] net/virtio: implement transmit path for packed queues Jens Freimann
` (5 subsequent siblings)
9 siblings, 0 replies; 22+ messages in thread
From: Jens Freimann @ 2018-12-14 15:59 UTC (permalink / raw)
To: dev; +Cc: tiwei.bie, maxime.coquelin, Gavin.Hu
Add support to dump packed virtqueue data to the
VIRTQUEUE_DUMP() macro.
Signed-off-by: Jens Freimann <jfreimann@redhat.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
---
drivers/net/virtio/virtqueue.h | 9 +++++++++
1 file changed, 9 insertions(+)
diff --git a/drivers/net/virtio/virtqueue.h b/drivers/net/virtio/virtqueue.h
index 809d15879..9c65ad54f 100644
--- a/drivers/net/virtio/virtqueue.h
+++ b/drivers/net/virtio/virtqueue.h
@@ -448,6 +448,15 @@ virtqueue_notify(struct virtqueue *vq)
uint16_t used_idx, nused; \
used_idx = (vq)->vq_ring.used->idx; \
nused = (uint16_t)(used_idx - (vq)->vq_used_cons_idx); \
+ if (vtpci_packed_queue((vq)->hw)) { \
+ PMD_INIT_LOG(DEBUG, \
+ "VQ: - size=%d; free=%d; used_cons_idx=%d; avail_idx=%d;" \
+ "VQ: - avail_wrap_counter=%d; used_wrap_counter=%d", \
+ (vq)->vq_nentries, (vq)->vq_free_cnt, (vq)->vq_used_cons_idx, \
+ (vq)->vq_avail_idx, (vq)->avail_wrap_counter, \
+ (vq)->used_wrap_counter); \
+ break; \
+ } \
PMD_INIT_LOG(DEBUG, \
"VQ: - size=%d; free=%d; used=%d; desc_head_idx=%d;" \
" avail.idx=%d; used_cons_idx=%d; used.idx=%d;" \
--
2.17.2
* [dpdk-dev] [PATCH v13 05/10] net/virtio: implement transmit path for packed queues
2018-12-14 15:59 [dpdk-dev] [PATCH v13 00/10] implement packed virtqueues Jens Freimann
` (3 preceding siblings ...)
2018-12-14 15:59 ` [dpdk-dev] [PATCH v13 04/10] net/virtio: dump packed virtqueue data Jens Freimann
@ 2018-12-14 15:59 ` Jens Freimann
2018-12-17 16:35 ` Maxime Coquelin
2018-12-14 15:59 ` [dpdk-dev] [PATCH v13 06/10] net/virtio: implement receive " Jens Freimann
` (4 subsequent siblings)
9 siblings, 1 reply; 22+ messages in thread
From: Jens Freimann @ 2018-12-14 15:59 UTC (permalink / raw)
To: dev; +Cc: tiwei.bie, maxime.coquelin, Gavin.Hu
This implements the transmit path for devices with
support for packed virtqueues.
Signed-off-by: Jens Freimann <jfreimann@redhat.com>
Signed-off-by: Tiwei Bie <tiwei.bie@intel.com>
---
drivers/net/virtio/virtio_ethdev.c | 57 ++++---
drivers/net/virtio/virtio_ethdev.h | 2 +
drivers/net/virtio/virtio_rxtx.c | 236 ++++++++++++++++++++++++++++-
drivers/net/virtio/virtqueue.h | 20 ++-
4 files changed, 293 insertions(+), 22 deletions(-)
diff --git a/drivers/net/virtio/virtio_ethdev.c b/drivers/net/virtio/virtio_ethdev.c
index e6ba1282b..9f1b72e56 100644
--- a/drivers/net/virtio/virtio_ethdev.c
+++ b/drivers/net/virtio/virtio_ethdev.c
@@ -390,6 +390,9 @@ virtio_init_queue(struct rte_eth_dev *dev, uint16_t vtpci_queue_idx)
if (vtpci_packed_queue(hw)) {
vq->avail_wrap_counter = 1;
vq->used_wrap_counter = 1;
+ vq->avail_used_flags =
+ VRING_DESC_F_AVAIL(vq->avail_wrap_counter) |
+ VRING_DESC_F_USED(!vq->avail_wrap_counter);
}
/*
@@ -497,17 +500,26 @@ virtio_init_queue(struct rte_eth_dev *dev, uint16_t vtpci_queue_idx)
memset(txr, 0, vq_size * sizeof(*txr));
for (i = 0; i < vq_size; i++) {
struct vring_desc *start_dp = txr[i].tx_indir;
-
- vring_desc_init_split(start_dp,
- RTE_DIM(txr[i].tx_indir));
+ struct vring_packed_desc *start_dp_packed =
+ txr[i].tx_indir_pq;
/* first indirect descriptor is always the tx header */
- start_dp->addr = txvq->virtio_net_hdr_mem
- + i * sizeof(*txr)
- + offsetof(struct virtio_tx_region, tx_hdr);
-
- start_dp->len = hw->vtnet_hdr_size;
- start_dp->flags = VRING_DESC_F_NEXT;
+ if (vtpci_packed_queue(hw)) {
+ start_dp_packed->addr = txvq->virtio_net_hdr_mem
+ + i * sizeof(*txr)
+ + offsetof(struct virtio_tx_region,
+ tx_hdr);
+ start_dp_packed->len = hw->vtnet_hdr_size;
+ } else {
+ vring_desc_init_split(start_dp,
+ RTE_DIM(txr[i].tx_indir));
+ start_dp->addr = txvq->virtio_net_hdr_mem
+ + i * sizeof(*txr)
+ + offsetof(struct virtio_tx_region,
+ tx_hdr);
+ start_dp->len = hw->vtnet_hdr_size;
+ start_dp->flags = VRING_DESC_F_NEXT;
+ }
}
}
@@ -1336,6 +1348,23 @@ set_rxtx_funcs(struct rte_eth_dev *eth_dev)
{
struct virtio_hw *hw = eth_dev->data->dev_private;
+ if (vtpci_packed_queue(hw)) {
+ PMD_INIT_LOG(INFO,
+ "virtio: using packed ring standard Tx path on port %u",
+ eth_dev->data->port_id);
+ eth_dev->tx_pkt_burst = virtio_xmit_pkts_packed;
+ } else {
+ if (hw->use_inorder_tx) {
+ PMD_INIT_LOG(INFO, "virtio: using inorder Tx path on port %u",
+ eth_dev->data->port_id);
+ eth_dev->tx_pkt_burst = virtio_xmit_pkts_inorder;
+ } else {
+ PMD_INIT_LOG(INFO, "virtio: using standard Tx path on port %u",
+ eth_dev->data->port_id);
+ eth_dev->tx_pkt_burst = virtio_xmit_pkts;
+ }
+ }
+
if (hw->use_simple_rx) {
PMD_INIT_LOG(INFO, "virtio: using simple Rx path on port %u",
eth_dev->data->port_id);
@@ -1356,15 +1385,7 @@ set_rxtx_funcs(struct rte_eth_dev *eth_dev)
eth_dev->rx_pkt_burst = &virtio_recv_pkts;
}
- if (hw->use_inorder_tx) {
- PMD_INIT_LOG(INFO, "virtio: using inorder Tx path on port %u",
- eth_dev->data->port_id);
- eth_dev->tx_pkt_burst = virtio_xmit_pkts_inorder;
- } else {
- PMD_INIT_LOG(INFO, "virtio: using standard Tx path on port %u",
- eth_dev->data->port_id);
- eth_dev->tx_pkt_burst = virtio_xmit_pkts;
- }
+
}
/* Only support 1:1 queue/interrupt mapping so far.
diff --git a/drivers/net/virtio/virtio_ethdev.h b/drivers/net/virtio/virtio_ethdev.h
index e0f80e5a4..05d355180 100644
--- a/drivers/net/virtio/virtio_ethdev.h
+++ b/drivers/net/virtio/virtio_ethdev.h
@@ -82,6 +82,8 @@ uint16_t virtio_recv_mergeable_pkts_inorder(void *rx_queue,
uint16_t virtio_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts);
+uint16_t virtio_xmit_pkts_packed(void *tx_queue, struct rte_mbuf **tx_pkts,
+ uint16_t nb_pkts);
uint16_t virtio_xmit_pkts_inorder(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts);
diff --git a/drivers/net/virtio/virtio_rxtx.c b/drivers/net/virtio/virtio_rxtx.c
index cb8f89f18..9cc7ad2d0 100644
--- a/drivers/net/virtio/virtio_rxtx.c
+++ b/drivers/net/virtio/virtio_rxtx.c
@@ -88,6 +88,23 @@ vq_ring_free_chain(struct virtqueue *vq, uint16_t desc_idx)
dp->next = VQ_RING_DESC_CHAIN_END;
}
+static void
+vq_ring_free_id_packed(struct virtqueue *vq, uint16_t id)
+{
+ struct vq_desc_extra *dxp;
+
+ dxp = &vq->vq_descx[id];
+ vq->vq_free_cnt += dxp->ndescs;
+
+ if (vq->vq_desc_tail_idx == VQ_RING_DESC_CHAIN_END)
+ vq->vq_desc_head_idx = id;
+ else
+ vq->vq_descx[vq->vq_desc_tail_idx].next = id;
+
+ vq->vq_desc_tail_idx = id;
+ dxp->next = VQ_RING_DESC_CHAIN_END;
+}
+
static uint16_t
virtqueue_dequeue_burst_rx(struct virtqueue *vq, struct rte_mbuf **rx_pkts,
uint32_t *len, uint16_t num)
@@ -165,6 +182,33 @@ virtqueue_dequeue_rx_inorder(struct virtqueue *vq,
#endif
/* Cleanup from completed transmits. */
+static void
+virtio_xmit_cleanup_packed(struct virtqueue *vq, int num)
+{
+ uint16_t used_idx, id;
+ uint16_t size = vq->vq_nentries;
+ struct vring_packed_desc *desc = vq->ring_packed.desc_packed;
+ struct vq_desc_extra *dxp;
+
+ used_idx = vq->vq_used_cons_idx;
+ while (num-- && desc_is_used(&desc[used_idx], vq)) {
+ used_idx = vq->vq_used_cons_idx;
+ id = desc[used_idx].id;
+ dxp = &vq->vq_descx[id];
+ vq->vq_used_cons_idx += dxp->ndescs;
+ if (vq->vq_used_cons_idx >= size) {
+ vq->vq_used_cons_idx -= size;
+ vq->used_wrap_counter ^= 1;
+ }
+ vq_ring_free_id_packed(vq, id);
+ if (dxp->cookie != NULL) {
+ rte_pktmbuf_free(dxp->cookie);
+ dxp->cookie = NULL;
+ }
+ used_idx = vq->vq_used_cons_idx;
+ }
+}
+
static void
virtio_xmit_cleanup(struct virtqueue *vq, uint16_t num)
{
@@ -456,6 +500,107 @@ virtqueue_enqueue_xmit_inorder(struct virtnet_tx *txvq,
vq->vq_desc_head_idx = idx & (vq->vq_nentries - 1);
}
+static inline void
+virtqueue_enqueue_xmit_packed(struct virtnet_tx *txvq, struct rte_mbuf *cookie,
+ uint16_t needed, int can_push)
+{
+ struct virtio_tx_region *txr = txvq->virtio_net_hdr_mz->addr;
+ struct vq_desc_extra *dxp;
+ struct virtqueue *vq = txvq->vq;
+ struct vring_packed_desc *start_dp, *head_dp;
+ uint16_t idx, id, head_idx, head_flags;
+ uint16_t head_size = vq->hw->vtnet_hdr_size;
+ struct virtio_net_hdr *hdr;
+ uint16_t prev;
+
+ id = vq->vq_desc_head_idx;
+
+ dxp = &vq->vq_descx[id];
+ dxp->ndescs = needed;
+ dxp->cookie = cookie;
+
+ head_idx = vq->vq_avail_idx;
+ idx = head_idx;
+ prev = head_idx;
+ start_dp = vq->ring_packed.desc_packed;
+
+ head_dp = &vq->ring_packed.desc_packed[idx];
+ head_flags = cookie->next ? VRING_DESC_F_NEXT : 0;
+ head_flags |= vq->avail_used_flags;
+
+ if (can_push) {
+ /* prepend cannot fail, checked by caller */
+ hdr = (struct virtio_net_hdr *)
+ rte_pktmbuf_prepend(cookie, head_size);
+ /* rte_pktmbuf_prepend() counts the hdr size to the pkt length,
+ * which is wrong. Below subtract restores correct pkt size.
+ */
+ cookie->pkt_len -= head_size;
+
+ /* if offload disabled, it is not zeroed below, do it now */
+ if (!vq->hw->has_tx_offload) {
+ ASSIGN_UNLESS_EQUAL(hdr->csum_start, 0);
+ ASSIGN_UNLESS_EQUAL(hdr->csum_offset, 0);
+ ASSIGN_UNLESS_EQUAL(hdr->flags, 0);
+ ASSIGN_UNLESS_EQUAL(hdr->gso_type, 0);
+ ASSIGN_UNLESS_EQUAL(hdr->gso_size, 0);
+ ASSIGN_UNLESS_EQUAL(hdr->hdr_len, 0);
+ }
+ } else {
+ /* setup first tx ring slot to point to header
+ * stored in reserved region.
+ */
+ start_dp[idx].addr = txvq->virtio_net_hdr_mem +
+ RTE_PTR_DIFF(&txr[idx].tx_hdr, txr);
+ start_dp[idx].len = vq->hw->vtnet_hdr_size;
+ hdr = (struct virtio_net_hdr *)&txr[idx].tx_hdr;
+ idx++;
+ if (idx >= vq->vq_nentries) {
+ idx -= vq->vq_nentries;
+ vq->avail_wrap_counter ^= 1;
+ vq->avail_used_flags =
+ VRING_DESC_F_AVAIL(vq->avail_wrap_counter) |
+ VRING_DESC_F_USED(!vq->avail_wrap_counter);
+ }
+ }
+
+ virtqueue_xmit_offload(hdr, cookie, vq->hw->has_tx_offload);
+
+ do {
+ uint16_t flags;
+
+ start_dp[idx].addr = VIRTIO_MBUF_DATA_DMA_ADDR(cookie, vq);
+ start_dp[idx].len = cookie->data_len;
+ if (likely(idx != head_idx)) {
+ flags = cookie->next ? VRING_DESC_F_NEXT : 0;
+ flags |= vq->avail_used_flags;
+ start_dp[idx].flags = flags;
+ }
+ prev = idx;
+ idx++;
+ if (idx >= vq->vq_nentries) {
+ idx -= vq->vq_nentries;
+ vq->avail_wrap_counter ^= 1;
+ vq->avail_used_flags =
+ VRING_DESC_F_AVAIL(vq->avail_wrap_counter) |
+ VRING_DESC_F_USED(!vq->avail_wrap_counter);
+ }
+ } while ((cookie = cookie->next) != NULL);
+
+ start_dp[prev].id = id;
+
+ vq->vq_free_cnt = (uint16_t)(vq->vq_free_cnt - needed);
+
+ vq->vq_desc_head_idx = dxp->next;
+ if (vq->vq_desc_head_idx == VQ_RING_DESC_CHAIN_END)
+ vq->vq_desc_tail_idx = VQ_RING_DESC_CHAIN_END;
+
+ vq->vq_avail_idx = idx;
+
+ rte_smp_wmb();
+ head_dp->flags = head_flags;
+}
+
static inline void
virtqueue_enqueue_xmit(struct virtnet_tx *txvq, struct rte_mbuf *cookie,
uint16_t needed, int use_indirect, int can_push,
@@ -733,8 +878,10 @@ virtio_dev_tx_queue_setup_finish(struct rte_eth_dev *dev,
PMD_INIT_FUNC_TRACE();
- if (hw->use_inorder_tx)
- vq->vq_ring.desc[vq->vq_nentries - 1].next = 0;
+ if (!vtpci_packed_queue(hw)) {
+ if (hw->use_inorder_tx)
+ vq->vq_ring.desc[vq->vq_nentries - 1].next = 0;
+ }
VIRTQUEUE_DUMP(vq);
@@ -1346,6 +1493,91 @@ virtio_recv_mergeable_pkts(void *rx_queue,
return nb_rx;
}
+uint16_t
+virtio_xmit_pkts_packed(void *tx_queue, struct rte_mbuf **tx_pkts,
+ uint16_t nb_pkts)
+{
+ struct virtnet_tx *txvq = tx_queue;
+ struct virtqueue *vq = txvq->vq;
+ struct virtio_hw *hw = vq->hw;
+ uint16_t hdr_size = hw->vtnet_hdr_size;
+ uint16_t nb_tx = 0;
+ int error;
+
+ if (unlikely(hw->started == 0 && tx_pkts != hw->inject_pkts))
+ return nb_tx;
+
+ if (unlikely(nb_pkts < 1))
+ return nb_pkts;
+
+ PMD_TX_LOG(DEBUG, "%d packets to xmit", nb_pkts);
+
+ if (nb_pkts > vq->vq_free_cnt)
+ virtio_xmit_cleanup_packed(vq, nb_pkts - vq->vq_free_cnt);
+
+ for (nb_tx = 0; nb_tx < nb_pkts; nb_tx++) {
+ struct rte_mbuf *txm = tx_pkts[nb_tx];
+ int can_push = 0, slots, need;
+
+ /* Do VLAN tag insertion */
+ if (unlikely(txm->ol_flags & PKT_TX_VLAN_PKT)) {
+ error = rte_vlan_insert(&txm);
+ if (unlikely(error)) {
+ rte_pktmbuf_free(txm);
+ continue;
+ }
+ }
+
+ /* optimize ring usage */
+ if ((vtpci_with_feature(hw, VIRTIO_F_ANY_LAYOUT) ||
+ vtpci_with_feature(hw, VIRTIO_F_VERSION_1)) &&
+ rte_mbuf_refcnt_read(txm) == 1 &&
+ RTE_MBUF_DIRECT(txm) &&
+ txm->nb_segs == 1 &&
+ rte_pktmbuf_headroom(txm) >= hdr_size &&
+ rte_is_aligned(rte_pktmbuf_mtod(txm, char *),
+ __alignof__(struct virtio_net_hdr_mrg_rxbuf)))
+ can_push = 1;
+
+ /* How many ring entries are needed for this Tx?
+ * any_layout => number of segments
+ * default => number of segments + 1
+ */
+ slots = txm->nb_segs + !can_push;
+ need = slots - vq->vq_free_cnt;
+
+ /* A positive value indicates we need to reclaim used descriptors */
+ if (unlikely(need > 0)) {
+ virtio_rmb();
+ need = RTE_MIN(need, (int)nb_pkts);
+ virtio_xmit_cleanup_packed(vq, need);
+ need = slots - vq->vq_free_cnt;
+ if (unlikely(need > 0)) {
+ PMD_TX_LOG(ERR,
+ "No free tx descriptors to transmit");
+ break;
+ }
+ }
+
+ /* Enqueue Packet buffers */
+ virtqueue_enqueue_xmit_packed(txvq, txm, slots, can_push);
+
+ txvq->stats.bytes += txm->pkt_len;
+ virtio_update_packet_stats(&txvq->stats, txm);
+ }
+
+ txvq->stats.packets += nb_tx;
+
+ if (likely(nb_tx)) {
+ if (unlikely(virtqueue_kick_prepare_packed(vq))) {
+ virtqueue_notify(vq);
+ PMD_TX_LOG(DEBUG, "Notified backend after xmit");
+ }
+ }
+
+ return nb_tx;
+}
+
uint16_t
virtio_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
{
diff --git a/drivers/net/virtio/virtqueue.h b/drivers/net/virtio/virtqueue.h
index 9c65ad54f..3cd7d6cf9 100644
--- a/drivers/net/virtio/virtqueue.h
+++ b/drivers/net/virtio/virtqueue.h
@@ -247,8 +247,12 @@ struct virtio_net_hdr_mrg_rxbuf {
#define VIRTIO_MAX_TX_INDIRECT 8
struct virtio_tx_region {
struct virtio_net_hdr_mrg_rxbuf tx_hdr;
- struct vring_desc tx_indir[VIRTIO_MAX_TX_INDIRECT]
- __attribute__((__aligned__(16)));
+ union {
+ struct vring_desc tx_indir[VIRTIO_MAX_TX_INDIRECT]
+ __attribute__((__aligned__(16)));
+ struct vring_packed_desc tx_indir_pq[VIRTIO_MAX_TX_INDIRECT]
+ __attribute__((__aligned__(16)));
+ };
};
static inline void
@@ -399,6 +403,7 @@ virtio_get_queue_type(struct virtio_hw *hw, uint16_t vtpci_queue_idx)
#define VIRTQUEUE_NUSED(vq) ((uint16_t)((vq)->vq_ring.used->idx - (vq)->vq_used_cons_idx))
void vq_ring_free_chain(struct virtqueue *vq, uint16_t desc_idx);
+void vq_ring_free_chain_packed(struct virtqueue *vq, uint16_t used_idx);
void vq_ring_free_inorder(struct virtqueue *vq, uint16_t desc_idx,
uint16_t num);
@@ -432,6 +437,17 @@ virtqueue_kick_prepare(struct virtqueue *vq)
return !(vq->vq_ring.used->flags & VRING_USED_F_NO_NOTIFY);
}
+static inline int
+virtqueue_kick_prepare_packed(struct virtqueue *vq)
+{
+ uint16_t flags;
+
+ virtio_mb();
+ flags = vq->ring_packed.device_event->desc_event_flags;
+
+ return flags != RING_EVENT_FLAGS_DISABLE;
+}
+
static inline void
virtqueue_notify(struct virtqueue *vq)
{
--
2.17.2
^ permalink raw reply [flat|nested] 22+ messages in thread
* [dpdk-dev] [PATCH v13 06/10] net/virtio: implement receive path for packed queues
2018-12-14 15:59 [dpdk-dev] [PATCH v13 00/10] implement packed virtqueues Jens Freimann
` (4 preceding siblings ...)
2018-12-14 15:59 ` [dpdk-dev] [PATCH v13 05/10] net/virtio: implement transmit path for packed queues Jens Freimann
@ 2018-12-14 15:59 ` Jens Freimann
2018-12-17 16:46 ` Maxime Coquelin
2018-12-14 15:59 ` [dpdk-dev] [PATCH v13 07/10] net/virtio: add virtio send command packed queue support Jens Freimann
` (3 subsequent siblings)
9 siblings, 1 reply; 22+ messages in thread
From: Jens Freimann @ 2018-12-14 15:59 UTC (permalink / raw)
To: dev; +Cc: tiwei.bie, maxime.coquelin, Gavin.Hu
Implement the receive part.
Signed-off-by: Jens Freimann <jfreimann@redhat.com>
Signed-off-by: Tiwei Bie <tiwei.bie@intel.com>
---
drivers/net/virtio/virtio_ethdev.c | 58 +++--
drivers/net/virtio/virtio_ethdev.h | 5 +
drivers/net/virtio/virtio_rxtx.c | 375 ++++++++++++++++++++++++++++-
drivers/net/virtio/virtqueue.c | 43 +++-
4 files changed, 457 insertions(+), 24 deletions(-)
diff --git a/drivers/net/virtio/virtio_ethdev.c b/drivers/net/virtio/virtio_ethdev.c
index 9f1b72e56..0394ac0af 100644
--- a/drivers/net/virtio/virtio_ethdev.c
+++ b/drivers/net/virtio/virtio_ethdev.c
@@ -1365,27 +1365,41 @@ set_rxtx_funcs(struct rte_eth_dev *eth_dev)
}
}
- if (hw->use_simple_rx) {
- PMD_INIT_LOG(INFO, "virtio: using simple Rx path on port %u",
- eth_dev->data->port_id);
- eth_dev->rx_pkt_burst = virtio_recv_pkts_vec;
- } else if (hw->use_inorder_rx) {
- PMD_INIT_LOG(INFO,
- "virtio: using inorder mergeable buffer Rx path on port %u",
- eth_dev->data->port_id);
- eth_dev->rx_pkt_burst = &virtio_recv_mergeable_pkts_inorder;
- } else if (vtpci_with_feature(hw, VIRTIO_NET_F_MRG_RXBUF)) {
- PMD_INIT_LOG(INFO,
- "virtio: using mergeable buffer Rx path on port %u",
- eth_dev->data->port_id);
- eth_dev->rx_pkt_burst = &virtio_recv_mergeable_pkts;
+ if (vtpci_packed_queue(hw)) {
+ if (vtpci_with_feature(hw, VIRTIO_NET_F_MRG_RXBUF)) {
+ PMD_INIT_LOG(INFO,
+ "virtio: using packed ring mergeable buffer Rx path on port %u",
+ eth_dev->data->port_id);
+ eth_dev->rx_pkt_burst =
+ &virtio_recv_mergeable_pkts_packed;
+ } else {
+ PMD_INIT_LOG(INFO,
+ "virtio: using packed ring standard Rx path on port %u",
+ eth_dev->data->port_id);
+ eth_dev->rx_pkt_burst = &virtio_recv_pkts_packed;
+ }
} else {
- PMD_INIT_LOG(INFO, "virtio: using standard Rx path on port %u",
- eth_dev->data->port_id);
- eth_dev->rx_pkt_burst = &virtio_recv_pkts;
+ if (hw->use_simple_rx) {
+ PMD_INIT_LOG(INFO, "virtio: using simple Rx path on port %u",
+ eth_dev->data->port_id);
+ eth_dev->rx_pkt_burst = virtio_recv_pkts_vec;
+ } else if (hw->use_inorder_rx) {
+ PMD_INIT_LOG(INFO,
+ "virtio: using inorder mergeable buffer Rx path on port %u",
+ eth_dev->data->port_id);
+ eth_dev->rx_pkt_burst =
+ &virtio_recv_mergeable_pkts_inorder;
+ } else if (vtpci_with_feature(hw, VIRTIO_NET_F_MRG_RXBUF)) {
+ PMD_INIT_LOG(INFO,
+ "virtio: using mergeable buffer Rx path on port %u",
+ eth_dev->data->port_id);
+ eth_dev->rx_pkt_burst = &virtio_recv_mergeable_pkts;
+ } else {
+ PMD_INIT_LOG(INFO, "virtio: using standard Rx path on port %u",
+ eth_dev->data->port_id);
+ eth_dev->rx_pkt_burst = &virtio_recv_pkts;
+ }
}
-
-
}
/* Only support 1:1 queue/interrupt mapping so far.
@@ -1947,6 +1961,12 @@ virtio_dev_configure(struct rte_eth_dev *dev)
}
}
+ if (vtpci_packed_queue(hw)) {
+ hw->use_simple_rx = 0;
+ hw->use_inorder_rx = 0;
+ hw->use_inorder_tx = 0;
+ }
+
#if defined RTE_ARCH_ARM64 || defined RTE_ARCH_ARM
if (!rte_cpu_get_flag_enabled(RTE_CPUFLAG_NEON)) {
hw->use_simple_rx = 0;
diff --git a/drivers/net/virtio/virtio_ethdev.h b/drivers/net/virtio/virtio_ethdev.h
index 05d355180..88b8c42a3 100644
--- a/drivers/net/virtio/virtio_ethdev.h
+++ b/drivers/net/virtio/virtio_ethdev.h
@@ -73,10 +73,15 @@ int virtio_dev_tx_queue_setup_finish(struct rte_eth_dev *dev,
uint16_t virtio_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
uint16_t nb_pkts);
+uint16_t virtio_recv_pkts_packed(void *rx_queue, struct rte_mbuf **rx_pkts,
+ uint16_t nb_pkts);
uint16_t virtio_recv_mergeable_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
uint16_t nb_pkts);
+uint16_t virtio_recv_mergeable_pkts_packed(void *rx_queue,
+ struct rte_mbuf **rx_pkts, uint16_t nb_pkts);
+
uint16_t virtio_recv_mergeable_pkts_inorder(void *rx_queue,
struct rte_mbuf **rx_pkts, uint16_t nb_pkts);
diff --git a/drivers/net/virtio/virtio_rxtx.c b/drivers/net/virtio/virtio_rxtx.c
index 9cc7ad2d0..8564f18a7 100644
--- a/drivers/net/virtio/virtio_rxtx.c
+++ b/drivers/net/virtio/virtio_rxtx.c
@@ -31,6 +31,7 @@
#include "virtqueue.h"
#include "virtio_rxtx.h"
#include "virtio_rxtx_simple.h"
+#include "virtio_ring.h"
#ifdef RTE_LIBRTE_VIRTIO_DEBUG_DUMP
#define VIRTIO_DUMP_PACKET(m, len) rte_pktmbuf_dump(stdout, m, len)
@@ -105,6 +106,47 @@ vq_ring_free_id_packed(struct virtqueue *vq, uint16_t id)
dxp->next = VQ_RING_DESC_CHAIN_END;
}
+static uint16_t
+virtqueue_dequeue_burst_rx_packed(struct virtqueue *vq,
+ struct rte_mbuf **rx_pkts,
+ uint32_t *len,
+ uint16_t num)
+{
+ struct rte_mbuf *cookie;
+ uint16_t used_idx;
+ uint16_t id;
+ struct vring_packed_desc *desc;
+ uint16_t i;
+
+ desc = vq->ring_packed.desc_packed;
+
+ for (i = 0; i < num; i++) {
+ used_idx = vq->vq_used_cons_idx;
+ if (!desc_is_used(&desc[used_idx], vq))
+ return i;
+ len[i] = desc[used_idx].len;
+ id = desc[used_idx].id;
+ cookie = (struct rte_mbuf *)vq->vq_descx[id].cookie;
+ if (unlikely(cookie == NULL)) {
+ PMD_DRV_LOG(ERR, "vring descriptor with no mbuf cookie at %u",
+ vq->vq_used_cons_idx);
+ break;
+ }
+ rte_prefetch0(cookie);
+ rte_packet_prefetch(rte_pktmbuf_mtod(cookie, void *));
+ rx_pkts[i] = cookie;
+
+ vq->vq_free_cnt++;
+ vq->vq_used_cons_idx++;
+ if (vq->vq_used_cons_idx >= vq->vq_nentries) {
+ vq->vq_used_cons_idx -= vq->vq_nentries;
+ vq->used_wrap_counter ^= 1;
+ }
+ }
+
+ return i;
+}
+
static uint16_t
virtqueue_dequeue_burst_rx(struct virtqueue *vq, struct rte_mbuf **rx_pkts,
uint32_t *len, uint16_t num)
@@ -350,6 +392,51 @@ virtqueue_enqueue_recv_refill(struct virtqueue *vq, struct rte_mbuf *cookie)
return 0;
}
+static inline int
+virtqueue_enqueue_recv_refill_packed(struct virtqueue *vq,
+ struct rte_mbuf **cookie, uint16_t num)
+{
+ struct vring_packed_desc *start_dp = vq->ring_packed.desc_packed;
+ uint16_t flags = VRING_DESC_F_WRITE | vq->avail_used_flags;
+ struct virtio_hw *hw = vq->hw;
+ struct vq_desc_extra *dxp;
+ uint16_t idx;
+ int i;
+
+ if (unlikely(vq->vq_free_cnt == 0))
+ return -ENOSPC;
+ if (unlikely(vq->vq_free_cnt < num))
+ return -EMSGSIZE;
+
+ for (i = 0; i < num; i++) {
+ idx = vq->vq_avail_idx;
+ dxp = &vq->vq_descx[idx];
+ dxp->cookie = (void *)cookie[i];
+ dxp->ndescs = 1;
+
+ start_dp[idx].addr = VIRTIO_MBUF_ADDR(cookie[i], vq) +
+ RTE_PKTMBUF_HEADROOM - hw->vtnet_hdr_size;
+ start_dp[idx].len = cookie[i]->buf_len - RTE_PKTMBUF_HEADROOM
+ + hw->vtnet_hdr_size;
+
+ vq->vq_desc_head_idx = dxp->next;
+ if (vq->vq_desc_head_idx == VQ_RING_DESC_CHAIN_END)
+ vq->vq_desc_tail_idx = vq->vq_desc_head_idx;
+ rte_smp_wmb();
+ start_dp[idx].flags = flags;
+ if (++vq->vq_avail_idx >= vq->vq_nentries) {
+ vq->vq_avail_idx -= vq->vq_nentries;
+ vq->avail_wrap_counter ^= 1;
+ vq->avail_used_flags =
+ VRING_DESC_F_AVAIL(vq->avail_wrap_counter) |
+ VRING_DESC_F_USED(!vq->avail_wrap_counter);
+ flags = VRING_DESC_F_WRITE | vq->avail_used_flags;
+ }
+ }
+ vq->vq_free_cnt = (uint16_t)(vq->vq_free_cnt - num);
+ return 0;
+}
+
/* When doing TSO, the IP length is not included in the pseudo header
* checksum of the packet given to the PMD, but for virtio it is
* expected.
@@ -801,7 +888,11 @@ virtio_dev_rx_queue_setup_finish(struct rte_eth_dev *dev, uint16_t queue_idx)
break;
/* Enqueue allocated buffers */
- error = virtqueue_enqueue_recv_refill(vq, m);
+ if (vtpci_packed_queue(vq->hw))
+ error = virtqueue_enqueue_recv_refill_packed(vq,
+ &m, 1);
+ else
+ error = virtqueue_enqueue_recv_refill(vq, m);
if (error) {
rte_pktmbuf_free(m);
break;
@@ -809,7 +900,8 @@ virtio_dev_rx_queue_setup_finish(struct rte_eth_dev *dev, uint16_t queue_idx)
nbufs++;
}
- vq_update_avail_idx(vq);
+ if (!vtpci_packed_queue(vq->hw))
+ vq_update_avail_idx(vq);
}
PMD_INIT_LOG(DEBUG, "Allocated %d bufs", nbufs);
@@ -896,7 +988,10 @@ virtio_discard_rxbuf(struct virtqueue *vq, struct rte_mbuf *m)
* Requeue the discarded mbuf. This should always be
* successful since it was just dequeued.
*/
- error = virtqueue_enqueue_recv_refill(vq, m);
+ if (vtpci_packed_queue(vq->hw))
+ error = virtqueue_enqueue_recv_refill_packed(vq, &m, 1);
+ else
+ error = virtqueue_enqueue_recv_refill(vq, m);
if (unlikely(error)) {
RTE_LOG(ERR, PMD, "cannot requeue discarded mbuf");
@@ -1136,6 +1231,104 @@ virtio_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
return nb_rx;
}
+uint16_t
+virtio_recv_pkts_packed(void *rx_queue, struct rte_mbuf **rx_pkts,
+ uint16_t nb_pkts)
+{
+ struct virtnet_rx *rxvq = rx_queue;
+ struct virtqueue *vq = rxvq->vq;
+ struct virtio_hw *hw = vq->hw;
+ struct rte_mbuf *rxm, *new_mbuf;
+ uint16_t num, nb_rx;
+ uint32_t len[VIRTIO_MBUF_BURST_SZ];
+ struct rte_mbuf *rcv_pkts[VIRTIO_MBUF_BURST_SZ];
+ int error;
+ uint32_t i, nb_enqueued;
+ uint32_t hdr_size;
+ struct virtio_net_hdr *hdr;
+
+ nb_rx = 0;
+ if (unlikely(hw->started == 0))
+ return nb_rx;
+
+ num = RTE_MIN(VIRTIO_MBUF_BURST_SZ, nb_pkts);
+ if (likely(num > DESC_PER_CACHELINE))
+ num = num - ((vq->vq_used_cons_idx + num) % DESC_PER_CACHELINE);
+
+ num = virtqueue_dequeue_burst_rx_packed(vq, rcv_pkts, len, num);
+ PMD_RX_LOG(DEBUG, "dequeue:%d", num);
+
+ nb_enqueued = 0;
+ hdr_size = hw->vtnet_hdr_size;
+
+ for (i = 0; i < num; i++) {
+ rxm = rcv_pkts[i];
+
+ PMD_RX_LOG(DEBUG, "packet len:%d", len[i]);
+
+ if (unlikely(len[i] < hdr_size + ETHER_HDR_LEN)) {
+ PMD_RX_LOG(ERR, "Packet drop");
+ nb_enqueued++;
+ virtio_discard_rxbuf(vq, rxm);
+ rxvq->stats.errors++;
+ continue;
+ }
+
+ rxm->port = rxvq->port_id;
+ rxm->data_off = RTE_PKTMBUF_HEADROOM;
+ rxm->ol_flags = 0;
+ rxm->vlan_tci = 0;
+
+ rxm->pkt_len = (uint32_t)(len[i] - hdr_size);
+ rxm->data_len = (uint16_t)(len[i] - hdr_size);
+
+ hdr = (struct virtio_net_hdr *)((char *)rxm->buf_addr +
+ RTE_PKTMBUF_HEADROOM - hdr_size);
+
+ if (hw->vlan_strip)
+ rte_vlan_strip(rxm);
+
+ if (hw->has_rx_offload && virtio_rx_offload(rxm, hdr) < 0) {
+ virtio_discard_rxbuf(vq, rxm);
+ rxvq->stats.errors++;
+ continue;
+ }
+
+ virtio_rx_stats_updated(rxvq, rxm);
+
+ rx_pkts[nb_rx++] = rxm;
+ }
+
+ rxvq->stats.packets += nb_rx;
+
+ /* Allocate new mbuf for the used descriptor */
+ while (likely(!virtqueue_full(vq))) {
+ new_mbuf = rte_mbuf_raw_alloc(rxvq->mpool);
+ if (unlikely(new_mbuf == NULL)) {
+ struct rte_eth_dev *dev =
+ &rte_eth_devices[rxvq->port_id];
+ dev->data->rx_mbuf_alloc_failed++;
+ break;
+ }
+ error = virtqueue_enqueue_recv_refill_packed(vq, &new_mbuf, 1);
+ if (unlikely(error)) {
+ rte_pktmbuf_free(new_mbuf);
+ break;
+ }
+ nb_enqueued++;
+ }
+
+ if (likely(nb_enqueued)) {
+ if (unlikely(virtqueue_kick_prepare_packed(vq))) {
+ virtqueue_notify(vq);
+ PMD_RX_LOG(DEBUG, "Notified");
+ }
+ }
+
+ return nb_rx;
+}
+
+
uint16_t
virtio_recv_mergeable_pkts_inorder(void *rx_queue,
struct rte_mbuf **rx_pkts,
@@ -1493,6 +1686,182 @@ virtio_recv_mergeable_pkts(void *rx_queue,
return nb_rx;
}
+uint16_t
+virtio_recv_mergeable_pkts_packed(void *rx_queue,
+ struct rte_mbuf **rx_pkts,
+ uint16_t nb_pkts)
+{
+ struct virtnet_rx *rxvq = rx_queue;
+ struct virtqueue *vq = rxvq->vq;
+ struct virtio_hw *hw = vq->hw;
+ struct rte_mbuf *rxm;
+ struct rte_mbuf *prev = NULL;
+ uint16_t num, nb_rx = 0;
+ uint32_t len[VIRTIO_MBUF_BURST_SZ];
+ struct rte_mbuf *rcv_pkts[VIRTIO_MBUF_BURST_SZ];
+ uint32_t nb_enqueued = 0;
+ uint32_t seg_num = 0;
+ uint32_t seg_res = 0;
+ uint32_t hdr_size = hw->vtnet_hdr_size;
+ int32_t i;
+ int error;
+
+ if (unlikely(hw->started == 0))
+ return nb_rx;
+
+
+ num = nb_pkts;
+ if (unlikely(num > VIRTIO_MBUF_BURST_SZ))
+ num = VIRTIO_MBUF_BURST_SZ;
+ if (likely(num > DESC_PER_CACHELINE))
+ num = num - ((vq->vq_used_cons_idx + num) % DESC_PER_CACHELINE);
+
+ num = virtqueue_dequeue_burst_rx_packed(vq, rcv_pkts, len, num);
+
+ for (i = 0; i < num; i++) {
+ struct virtio_net_hdr_mrg_rxbuf *header;
+
+ PMD_RX_LOG(DEBUG, "dequeue:%d", num);
+ PMD_RX_LOG(DEBUG, "packet len:%d", len[i]);
+
+ rxm = rcv_pkts[i];
+
+ if (unlikely(len[i] < hdr_size + ETHER_HDR_LEN)) {
+ PMD_RX_LOG(ERR, "Packet drop");
+ nb_enqueued++;
+ virtio_discard_rxbuf(vq, rxm);
+ rxvq->stats.errors++;
+ continue;
+ }
+
+ header = (struct virtio_net_hdr_mrg_rxbuf *)((char *)
+ rxm->buf_addr + RTE_PKTMBUF_HEADROOM - hdr_size);
+ seg_num = header->num_buffers;
+
+ if (seg_num == 0)
+ seg_num = 1;
+
+ rxm->data_off = RTE_PKTMBUF_HEADROOM;
+ rxm->nb_segs = seg_num;
+ rxm->ol_flags = 0;
+ rxm->vlan_tci = 0;
+ rxm->pkt_len = (uint32_t)(len[i] - hdr_size);
+ rxm->data_len = (uint16_t)(len[i] - hdr_size);
+
+ rxm->port = rxvq->port_id;
+ rx_pkts[nb_rx] = rxm;
+ prev = rxm;
+
+ if (hw->has_rx_offload &&
+ virtio_rx_offload(rxm, &header->hdr) < 0) {
+ virtio_discard_rxbuf(vq, rxm);
+ rxvq->stats.errors++;
+ continue;
+ }
+
+ if (hw->vlan_strip)
+ rte_vlan_strip(rx_pkts[nb_rx]);
+
+ seg_res = seg_num - 1;
+
+ /* Merge remaining segments */
+ while (seg_res != 0 && i < (num - 1)) {
+ i++;
+
+ rxm = rcv_pkts[i];
+ rxm->data_off = RTE_PKTMBUF_HEADROOM - hdr_size;
+ rxm->pkt_len = (uint32_t)(len[i]);
+ rxm->data_len = (uint16_t)(len[i]);
+
+ rx_pkts[nb_rx]->pkt_len += (uint32_t)(len[i]);
+ rx_pkts[nb_rx]->data_len += (uint16_t)(len[i]);
+
+ if (prev)
+ prev->next = rxm;
+
+ prev = rxm;
+ seg_res -= 1;
+ }
+
+ if (!seg_res) {
+ virtio_rx_stats_updated(rxvq, rx_pkts[nb_rx]);
+ nb_rx++;
+ }
+ }
+
+ /* Last packet still need merge segments */
+ while (seg_res != 0) {
+ uint16_t rcv_cnt = RTE_MIN((uint16_t)seg_res,
+ VIRTIO_MBUF_BURST_SZ);
+ if (likely(vq->vq_free_cnt >= rcv_cnt)) {
+ num = virtqueue_dequeue_burst_rx_packed(vq, rcv_pkts,
+ len, rcv_cnt);
+ uint16_t extra_idx = 0;
+
+ rcv_cnt = num;
+
+ while (extra_idx < rcv_cnt) {
+ rxm = rcv_pkts[extra_idx];
+
+ rxm->data_off =
+ RTE_PKTMBUF_HEADROOM - hdr_size;
+ rxm->pkt_len = (uint32_t)(len[extra_idx]);
+ rxm->data_len = (uint16_t)(len[extra_idx]);
+
+ prev->next = rxm;
+ prev = rxm;
+ rx_pkts[nb_rx]->pkt_len += len[extra_idx];
+ rx_pkts[nb_rx]->data_len += len[extra_idx];
+ extra_idx += 1;
+ }
+ seg_res -= rcv_cnt;
+ if (!seg_res) {
+ virtio_rx_stats_updated(rxvq, rx_pkts[nb_rx]);
+ nb_rx++;
+ }
+ } else {
+ PMD_RX_LOG(ERR,
+ "Not enough segments for packet.");
+ if (prev)
+ virtio_discard_rxbuf(vq, prev);
+ rxvq->stats.errors++;
+ break;
+ }
+ }
+
+ rxvq->stats.packets += nb_rx;
+
+ /* Allocate new mbuf for the used descriptor */
+ if (likely(!virtqueue_full(vq))) {
+ /* free_cnt may include mrg descs */
+ uint16_t free_cnt = vq->vq_free_cnt;
+ struct rte_mbuf *new_pkts[free_cnt];
+
+ if (!rte_pktmbuf_alloc_bulk(rxvq->mpool, new_pkts, free_cnt)) {
+ error = virtqueue_enqueue_recv_refill_packed(vq,
+ new_pkts, free_cnt);
+ if (unlikely(error)) {
+ for (i = 0; i < free_cnt; i++)
+ rte_pktmbuf_free(new_pkts[i]);
+ }
+ nb_enqueued += free_cnt;
+ } else {
+ struct rte_eth_dev *dev =
+ &rte_eth_devices[rxvq->port_id];
+ dev->data->rx_mbuf_alloc_failed += free_cnt;
+ }
+ }
+
+ if (likely(nb_enqueued)) {
+ if (unlikely(virtqueue_kick_prepare_packed(vq))) {
+ virtqueue_notify(vq);
+ PMD_RX_LOG(DEBUG, "Notified");
+ }
+ }
+
+ return nb_rx;
+}
+
uint16_t
virtio_xmit_pkts_packed(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts)
diff --git a/drivers/net/virtio/virtqueue.c b/drivers/net/virtio/virtqueue.c
index 56a77cc71..5b03f7a27 100644
--- a/drivers/net/virtio/virtqueue.c
+++ b/drivers/net/virtio/virtqueue.c
@@ -54,9 +54,36 @@ virtqueue_detach_unused(struct virtqueue *vq)
return NULL;
}
+/* Flush used descs */
+static void
+virtqueue_rxvq_flush_packed(struct virtqueue *vq)
+{
+ struct vq_desc_extra *dxp;
+ uint16_t i;
+
+ struct vring_packed_desc *descs = vq->ring_packed.desc_packed;
+ int cnt = 0;
+
+ i = vq->vq_used_cons_idx;
+ while (desc_is_used(&descs[i], vq) && cnt++ < vq->vq_nentries) {
+ dxp = &vq->vq_descx[descs[i].id];
+ if (dxp->cookie != NULL) {
+ rte_pktmbuf_free(dxp->cookie);
+ dxp->cookie = NULL;
+ }
+ vq->vq_free_cnt++;
+ vq->vq_used_cons_idx++;
+ if (vq->vq_used_cons_idx >= vq->vq_nentries) {
+ vq->vq_used_cons_idx -= vq->vq_nentries;
+ vq->used_wrap_counter ^= 1;
+ }
+ i = vq->vq_used_cons_idx;
+ }
+}
+
/* Flush the elements in the used ring. */
-void
-virtqueue_rxvq_flush(struct virtqueue *vq)
+static void
+virtqueue_rxvq_flush_split(struct virtqueue *vq)
{
struct virtnet_rx *rxq = &vq->rxq;
struct virtio_hw *hw = vq->hw;
@@ -102,3 +129,15 @@ virtqueue_rxvq_flush(struct virtqueue *vq)
}
}
}
+
+/* Flush the elements in the used ring. */
+void
+virtqueue_rxvq_flush(struct virtqueue *vq)
+{
+ struct virtio_hw *hw = vq->hw;
+
+ if (vtpci_packed_queue(hw))
+ virtqueue_rxvq_flush_packed(vq);
+ else
+ virtqueue_rxvq_flush_split(vq);
+}
--
2.17.2
* [dpdk-dev] [PATCH v13 07/10] net/virtio: add virtio send command packed queue support
2018-12-14 15:59 [dpdk-dev] [PATCH v13 00/10] implement packed virtqueues Jens Freimann
` (5 preceding siblings ...)
2018-12-14 15:59 ` [dpdk-dev] [PATCH v13 06/10] net/virtio: implement receive " Jens Freimann
@ 2018-12-14 15:59 ` Jens Freimann
2018-12-17 16:48 ` Maxime Coquelin
2018-12-14 15:59 ` [dpdk-dev] [PATCH v13 08/10] net/virtio-user: add option to use packed queues Jens Freimann
` (2 subsequent siblings)
9 siblings, 1 reply; 22+ messages in thread
From: Jens Freimann @ 2018-12-14 15:59 UTC (permalink / raw)
To: dev; +Cc: tiwei.bie, maxime.coquelin, Gavin.Hu
Use packed virtqueue format when reading and writing descriptors
to/from the ring.
Signed-off-by: Jens Freimann <jfreimann@redhat.com>
---
drivers/net/virtio/virtio_ethdev.c | 90 ++++++++++++++++++++++++++++++
1 file changed, 90 insertions(+)
diff --git a/drivers/net/virtio/virtio_ethdev.c b/drivers/net/virtio/virtio_ethdev.c
index 0394ac0af..5e7af51c9 100644
--- a/drivers/net/virtio/virtio_ethdev.c
+++ b/drivers/net/virtio/virtio_ethdev.c
@@ -141,6 +141,90 @@ static const struct rte_virtio_xstats_name_off rte_virtio_txq_stat_strings[] = {
struct virtio_hw_internal virtio_hw_internal[RTE_MAX_ETHPORTS];
+static struct virtio_pmd_ctrl *
+virtio_pq_send_command(struct virtnet_ctl *cvq, struct virtio_pmd_ctrl *ctrl,
+ int *dlen, int pkt_num)
+{
+ struct virtqueue *vq = cvq->vq;
+ int head;
+ struct vring_packed_desc *desc = vq->ring_packed.desc_packed;
+ struct virtio_pmd_ctrl *result;
+ int wrap_counter;
+ int sum = 0;
+ int k;
+
+ /*
+ * Format is enforced in qemu code:
+ * One TX packet for header;
+ * At least one TX packet per argument;
+ * One RX packet for ACK.
+ */
+ head = vq->vq_avail_idx;
+ wrap_counter = vq->avail_wrap_counter;
+ desc[head].flags = VRING_DESC_F_NEXT;
+ desc[head].addr = cvq->virtio_net_hdr_mem;
+ desc[head].len = sizeof(struct virtio_net_ctrl_hdr);
+ vq->vq_free_cnt--;
+ if (++vq->vq_avail_idx >= vq->vq_nentries) {
+ vq->vq_avail_idx -= vq->vq_nentries;
+ vq->avail_wrap_counter ^= 1;
+ }
+
+ for (k = 0; k < pkt_num; k++) {
+ desc[vq->vq_avail_idx].addr = cvq->virtio_net_hdr_mem
+ + sizeof(struct virtio_net_ctrl_hdr)
+ + sizeof(ctrl->status) + sizeof(uint8_t) * sum;
+ desc[vq->vq_avail_idx].len = dlen[k];
+ desc[vq->vq_avail_idx].flags = VRING_DESC_F_NEXT;
+ sum += dlen[k];
+ _set_desc_avail(&desc[vq->vq_avail_idx],
+ vq->avail_wrap_counter);
+ rte_smp_wmb();
+ vq->vq_free_cnt--;
+ if (++vq->vq_avail_idx >= vq->vq_nentries) {
+ vq->vq_avail_idx -= vq->vq_nentries;
+ vq->avail_wrap_counter ^= 1;
+ }
+ }
+
+
+ desc[vq->vq_avail_idx].addr = cvq->virtio_net_hdr_mem
+ + sizeof(struct virtio_net_ctrl_hdr);
+ desc[vq->vq_avail_idx].len = sizeof(ctrl->status);
+ desc[vq->vq_avail_idx].flags = VRING_DESC_F_WRITE;
+ _set_desc_avail(&desc[vq->vq_avail_idx],
+ vq->avail_wrap_counter);
+ _set_desc_avail(&desc[head], wrap_counter);
+ rte_smp_wmb();
+
+ vq->vq_free_cnt--;
+ if (++vq->vq_avail_idx >= vq->vq_nentries) {
+ vq->vq_avail_idx -= vq->vq_nentries;
+ vq->avail_wrap_counter ^= 1;
+ }
+
+ virtqueue_notify(vq);
+
+ /* wait for used descriptors in virtqueue */
+ do {
+ rte_rmb();
+ usleep(100);
+ } while (!desc_is_used(&desc[head], vq));
+
+ /* now get used descriptors */
+ while (desc_is_used(&desc[vq->vq_used_cons_idx], vq)) {
+ vq->vq_free_cnt++;
+ if (++vq->vq_used_cons_idx >= vq->vq_nentries) {
+ vq->vq_used_cons_idx -= vq->vq_nentries;
+ vq->used_wrap_counter ^= 1;
+ }
+ }
+
+ result = cvq->virtio_net_hdr_mz->addr;
+ return result;
+}
+
static int
virtio_send_command(struct virtnet_ctl *cvq, struct virtio_pmd_ctrl *ctrl,
int *dlen, int pkt_num)
@@ -174,6 +258,11 @@ virtio_send_command(struct virtnet_ctl *cvq, struct virtio_pmd_ctrl *ctrl,
memcpy(cvq->virtio_net_hdr_mz->addr, ctrl,
sizeof(struct virtio_pmd_ctrl));
+ if (vtpci_packed_queue(vq->hw)) {
+ result = virtio_pq_send_command(cvq, ctrl, dlen, pkt_num);
+ goto out_unlock;
+ }
+
/*
* Format is enforced in qemu code:
* One TX packet for header;
@@ -245,6 +334,7 @@ virtio_send_command(struct virtnet_ctl *cvq, struct virtio_pmd_ctrl *ctrl,
result = cvq->virtio_net_hdr_mz->addr;
+out_unlock:
rte_spinlock_unlock(&cvq->lock);
return result->status;
}
--
2.17.2
* [dpdk-dev] [PATCH v13 08/10] net/virtio-user: add option to use packed queues
2018-12-14 15:59 [dpdk-dev] [PATCH v13 00/10] implement packed virtqueues Jens Freimann
` (6 preceding siblings ...)
2018-12-14 15:59 ` [dpdk-dev] [PATCH v13 07/10] net/virtio: add virtio send command packed queue support Jens Freimann
@ 2018-12-14 15:59 ` Jens Freimann
2018-12-17 16:49 ` Maxime Coquelin
2018-12-14 15:59 ` [dpdk-dev] [PATCH v13 09/10] net/virtio-user: fail if q used with packed vq Jens Freimann
2018-12-14 15:59 ` [dpdk-dev] [PATCH v13 10/10] net/virtio: enable packed virtqueues by default Jens Freimann
9 siblings, 1 reply; 22+ messages in thread
From: Jens Freimann @ 2018-12-14 15:59 UTC (permalink / raw)
To: dev; +Cc: tiwei.bie, maxime.coquelin, Gavin.Hu
From: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Add option to enable packed queue support for virtio-user
devices.
Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
---
.../net/virtio/virtio_user/virtio_user_dev.c | 18 ++++++++++++++----
.../net/virtio/virtio_user/virtio_user_dev.h | 2 +-
drivers/net/virtio/virtio_user_ethdev.c | 14 +++++++++++++-
3 files changed, 28 insertions(+), 6 deletions(-)
diff --git a/drivers/net/virtio/virtio_user/virtio_user_dev.c b/drivers/net/virtio/virtio_user/virtio_user_dev.c
index 20816c936..697ba4ae8 100644
--- a/drivers/net/virtio/virtio_user/virtio_user_dev.c
+++ b/drivers/net/virtio/virtio_user/virtio_user_dev.c
@@ -58,6 +58,8 @@ virtio_user_kick_queue(struct virtio_user_dev *dev, uint32_t queue_sel)
state.index = queue_sel;
state.num = 0; /* no reservation */
+ if (dev->features & (1ULL << VIRTIO_F_RING_PACKED))
+ state.num |= (1 << 15);
dev->ops->send_request(dev, VHOST_USER_SET_VRING_BASE, &state);
dev->ops->send_request(dev, VHOST_USER_SET_VRING_ADDR, &addr);
@@ -407,12 +409,13 @@ virtio_user_dev_setup(struct virtio_user_dev *dev)
1ULL << VIRTIO_NET_F_GUEST_TSO4 | \
1ULL << VIRTIO_NET_F_GUEST_TSO6 | \
1ULL << VIRTIO_F_IN_ORDER | \
- 1ULL << VIRTIO_F_VERSION_1)
+ 1ULL << VIRTIO_F_VERSION_1 | \
+ 1ULL << VIRTIO_F_RING_PACKED)
int
virtio_user_dev_init(struct virtio_user_dev *dev, char *path, int queues,
int cq, int queue_size, const char *mac, char **ifname,
- int mrg_rxbuf, int in_order)
+ int mrg_rxbuf, int in_order, int packed_vq)
{
pthread_mutex_init(&dev->mutex, NULL);
snprintf(dev->path, PATH_MAX, "%s", path);
@@ -464,10 +467,17 @@ virtio_user_dev_init(struct virtio_user_dev *dev, char *path, int queues,
if (!in_order)
dev->unsupported_features |= (1ull << VIRTIO_F_IN_ORDER);
- if (dev->mac_specified)
- dev->frontend_features |= (1ull << VIRTIO_NET_F_MAC);
+ if (packed_vq)
+ dev->device_features |= (1ull << VIRTIO_F_RING_PACKED);
else
+ dev->device_features &= ~(1ull << VIRTIO_F_RING_PACKED);
+
+ if (dev->mac_specified) {
+ dev->device_features |= (1ull << VIRTIO_NET_F_MAC);
+ } else {
+ dev->device_features &= ~(1ull << VIRTIO_NET_F_MAC);
dev->unsupported_features |= (1ull << VIRTIO_NET_F_MAC);
+ }
if (cq) {
/* device does not really need to know anything about CQ,
diff --git a/drivers/net/virtio/virtio_user/virtio_user_dev.h b/drivers/net/virtio/virtio_user/virtio_user_dev.h
index c42ce5d4b..672a8161a 100644
--- a/drivers/net/virtio/virtio_user/virtio_user_dev.h
+++ b/drivers/net/virtio/virtio_user/virtio_user_dev.h
@@ -50,7 +50,7 @@ int virtio_user_start_device(struct virtio_user_dev *dev);
int virtio_user_stop_device(struct virtio_user_dev *dev);
int virtio_user_dev_init(struct virtio_user_dev *dev, char *path, int queues,
int cq, int queue_size, const char *mac, char **ifname,
- int mrg_rxbuf, int in_order);
+ int mrg_rxbuf, int in_order, int packed_vq);
void virtio_user_dev_uninit(struct virtio_user_dev *dev);
void virtio_user_handle_cq(struct virtio_user_dev *dev, uint16_t queue_idx);
uint8_t virtio_user_handle_mq(struct virtio_user_dev *dev, uint16_t q_pairs);
diff --git a/drivers/net/virtio/virtio_user_ethdev.c b/drivers/net/virtio/virtio_user_ethdev.c
index f8791391a..af2800605 100644
--- a/drivers/net/virtio/virtio_user_ethdev.c
+++ b/drivers/net/virtio/virtio_user_ethdev.c
@@ -361,6 +361,8 @@ static const char *valid_args[] = {
VIRTIO_USER_ARG_MRG_RXBUF,
#define VIRTIO_USER_ARG_IN_ORDER "in_order"
VIRTIO_USER_ARG_IN_ORDER,
+#define VIRTIO_USER_ARG_PACKED_VQ "packed_vq"
+ VIRTIO_USER_ARG_PACKED_VQ,
NULL
};
@@ -468,6 +470,7 @@ virtio_user_pmd_probe(struct rte_vdev_device *dev)
char *ifname = NULL;
char *mac_addr = NULL;
int ret = -1;
+ uint64_t packed_vq = 0;
kvlist = rte_kvargs_parse(rte_vdev_device_args(dev), valid_args);
if (!kvlist) {
@@ -551,6 +554,15 @@ virtio_user_pmd_probe(struct rte_vdev_device *dev)
cq = 1;
}
+ if (rte_kvargs_count(kvlist, VIRTIO_USER_ARG_PACKED_VQ) == 1) {
+ if (rte_kvargs_process(kvlist, VIRTIO_USER_ARG_PACKED_VQ,
+ &get_integer_arg, &packed_vq) < 0) {
+ PMD_INIT_LOG(ERR, "error to parse %s",
+ VIRTIO_USER_ARG_PACKED_VQ);
+ goto end;
+ }
+ }
+
if (queues > 1 && cq == 0) {
PMD_INIT_LOG(ERR, "multi-q requires ctrl-q");
goto end;
@@ -598,7 +610,7 @@ virtio_user_pmd_probe(struct rte_vdev_device *dev)
vu_dev->is_server = false;
if (virtio_user_dev_init(hw->virtio_user_dev, path, queues, cq,
queue_size, mac_addr, &ifname, mrg_rxbuf,
- in_order) < 0) {
+ in_order, packed_vq) < 0) {
PMD_INIT_LOG(ERR, "virtio_user_dev_init fails");
virtio_user_eth_dev_free(eth_dev);
goto end;
--
2.17.2
* [dpdk-dev] [PATCH v13 09/10] net/virtio-user: fail if q used with packed vq
2018-12-14 15:59 [dpdk-dev] [PATCH v13 00/10] implement packed virtqueues Jens Freimann
` (7 preceding siblings ...)
2018-12-14 15:59 ` [dpdk-dev] [PATCH v13 08/10] net/virtio-user: add option to use packed queues Jens Freimann
@ 2018-12-14 15:59 ` Jens Freimann
2018-12-17 16:52 ` Maxime Coquelin
2018-12-14 15:59 ` [dpdk-dev] [PATCH v13 10/10] net/virtio: enable packed virtqueues by default Jens Freimann
9 siblings, 1 reply; 22+ messages in thread
From: Jens Freimann @ 2018-12-14 15:59 UTC (permalink / raw)
To: dev; +Cc: tiwei.bie, maxime.coquelin, Gavin.Hu
Until we have support for control virtqueues, disable them and fail
device initialization if one is requested as a parameter.
Signed-off-by: Jens Freimann <jfreimann@redhat.com>
---
drivers/net/virtio/virtio_user/virtio_user_dev.c | 10 ++++++++--
1 file changed, 8 insertions(+), 2 deletions(-)
diff --git a/drivers/net/virtio/virtio_user/virtio_user_dev.c b/drivers/net/virtio/virtio_user/virtio_user_dev.c
index 697ba4ae8..14597eb73 100644
--- a/drivers/net/virtio/virtio_user/virtio_user_dev.c
+++ b/drivers/net/virtio/virtio_user/virtio_user_dev.c
@@ -467,10 +467,16 @@ virtio_user_dev_init(struct virtio_user_dev *dev, char *path, int queues,
if (!in_order)
dev->unsupported_features |= (1ull << VIRTIO_F_IN_ORDER);
- if (packed_vq)
+ if (packed_vq) {
+ if (cq) {
+ PMD_INIT_LOG(ERR,
+ "control vq not supported with packed virtqueues");
+ return -1;
+ }
dev->device_features |= (1ull << VIRTIO_F_RING_PACKED);
- else
+ } else {
dev->device_features &= ~(1ull << VIRTIO_F_RING_PACKED);
+ }
if (dev->mac_specified) {
dev->device_features |= (1ull << VIRTIO_NET_F_MAC);
--
2.17.2
* [dpdk-dev] [PATCH v13 10/10] net/virtio: enable packed virtqueues by default
2018-12-14 15:59 [dpdk-dev] [PATCH v13 00/10] implement packed virtqueues Jens Freimann
` (8 preceding siblings ...)
2018-12-14 15:59 ` [dpdk-dev] [PATCH v13 09/10] net/virtio-user: fail if q used with packed vq Jens Freimann
@ 2018-12-14 15:59 ` Jens Freimann
2018-12-17 16:52 ` Maxime Coquelin
9 siblings, 1 reply; 22+ messages in thread
From: Jens Freimann @ 2018-12-14 15:59 UTC (permalink / raw)
To: dev; +Cc: tiwei.bie, maxime.coquelin, Gavin.Hu
Signed-off-by: Jens Freimann <jfreimann@redhat.com>
---
drivers/net/virtio/virtio_ethdev.h | 1 +
drivers/net/virtio/virtio_user/virtio_user_dev.c | 3 ++-
2 files changed, 3 insertions(+), 1 deletion(-)
diff --git a/drivers/net/virtio/virtio_ethdev.h b/drivers/net/virtio/virtio_ethdev.h
index 88b8c42a3..364ecbb50 100644
--- a/drivers/net/virtio/virtio_ethdev.h
+++ b/drivers/net/virtio/virtio_ethdev.h
@@ -34,6 +34,7 @@
1u << VIRTIO_RING_F_INDIRECT_DESC | \
1ULL << VIRTIO_F_VERSION_1 | \
1ULL << VIRTIO_F_IN_ORDER | \
+ 1ULL << VIRTIO_F_RING_PACKED | \
1ULL << VIRTIO_F_IOMMU_PLATFORM)
#define VIRTIO_PMD_SUPPORTED_GUEST_FEATURES \
diff --git a/drivers/net/virtio/virtio_user/virtio_user_dev.c b/drivers/net/virtio/virtio_user/virtio_user_dev.c
index 14597eb73..376622a6c 100644
--- a/drivers/net/virtio/virtio_user/virtio_user_dev.c
+++ b/drivers/net/virtio/virtio_user/virtio_user_dev.c
@@ -410,7 +410,8 @@ virtio_user_dev_setup(struct virtio_user_dev *dev)
1ULL << VIRTIO_NET_F_GUEST_TSO6 | \
1ULL << VIRTIO_F_IN_ORDER | \
1ULL << VIRTIO_F_VERSION_1 | \
- 1ULL << VIRTIO_F_RING_PACKED)
+ 1ULL << VIRTIO_F_RING_PACKED | \
+ 1ULL << VIRTIO_RING_F_EVENT_IDX)
int
virtio_user_dev_init(struct virtio_user_dev *dev, char *path, int queues,
--
2.17.2
* Re: [dpdk-dev] [PATCH v13 01/10] net/virtio: add packed virtqueue defines
2018-12-14 15:59 ` [dpdk-dev] [PATCH v13 01/10] net/virtio: add packed virtqueue defines Jens Freimann
@ 2018-12-17 15:45 ` Maxime Coquelin
0 siblings, 0 replies; 22+ messages in thread
From: Maxime Coquelin @ 2018-12-17 15:45 UTC (permalink / raw)
To: Jens Freimann, dev; +Cc: tiwei.bie, Gavin.Hu
On 12/14/18 4:59 PM, Jens Freimann wrote:
> Signed-off-by: Jens Freimann <jfreimann@redhat.com>
> ---
> drivers/net/virtio/virtio_pci.h | 1 +
> drivers/net/virtio/virtio_ring.h | 30 ++++++++++++++++++++++++++++++
> drivers/net/virtio/virtqueue.h | 6 ++++++
> 3 files changed, 37 insertions(+)
>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Thanks,
Maxime
* Re: [dpdk-dev] [PATCH v13 02/10] net/virtio: add packed virtqueue helpers
2018-12-14 15:59 ` [dpdk-dev] [PATCH v13 02/10] net/virtio: add packed virtqueue helpers Jens Freimann
@ 2018-12-17 16:09 ` Maxime Coquelin
2018-12-17 16:30 ` Maxime Coquelin
1 sibling, 0 replies; 22+ messages in thread
From: Maxime Coquelin @ 2018-12-17 16:09 UTC (permalink / raw)
To: Jens Freimann, dev; +Cc: tiwei.bie, Gavin.Hu
On 12/14/18 4:59 PM, Jens Freimann wrote:
> Add helper functions to set/clear and check descriptor flags.
>
> Signed-off-by: Jens Freimann <jfreimann@redhat.com>
> ---
> drivers/net/virtio/virtio_pci.h | 6 +++
> drivers/net/virtio/virtqueue.h | 91 ++++++++++++++++++++++++++++++++-
> 2 files changed, 95 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/net/virtio/virtio_pci.h b/drivers/net/virtio/virtio_pci.h
> index 4c975a531..b22b62dad 100644
> --- a/drivers/net/virtio/virtio_pci.h
> +++ b/drivers/net/virtio/virtio_pci.h
> @@ -315,6 +315,12 @@ vtpci_with_feature(struct virtio_hw *hw, uint64_t bit)
> return (hw->guest_features & (1ULL << bit)) != 0;
> }
>
> +static inline int
> +vtpci_packed_queue(struct virtio_hw *hw)
> +{
> + return vtpci_with_feature(hw, VIRTIO_F_RING_PACKED);
> +}
> +
> /*
> * Function declaration from virtio_pci.c
> */
> diff --git a/drivers/net/virtio/virtqueue.h b/drivers/net/virtio/virtqueue.h
> index d4e0858e4..19fabae0b 100644
> --- a/drivers/net/virtio/virtqueue.h
> +++ b/drivers/net/virtio/virtqueue.h
> @@ -251,6 +251,44 @@ struct virtio_tx_region {
> __attribute__((__aligned__(16)));
> };
>
> +static inline void
> +_set_desc_avail(struct vring_packed_desc *desc, int wrap_counter)
> +{
> + desc->flags |= VRING_DESC_F_AVAIL(wrap_counter) |
> + VRING_DESC_F_USED(!wrap_counter);
> +}
> +
> +static inline void
> +set_desc_avail(struct virtqueue *vq, struct vring_packed_desc *desc)
> +{
> + _set_desc_avail(desc, vq->avail_wrap_counter);
> +}
> +
> +static inline int
> +desc_is_used(struct vring_packed_desc *desc, struct virtqueue *vq)
> +{
> + uint16_t used, avail, flags;
> +
> + flags = desc->flags;
> + used = !!(flags & VRING_DESC_F_USED(1));
> + avail = !!(flags & VRING_DESC_F_AVAIL(1));
> +
> + return avail == used && used == vq->used_wrap_counter;
> +}
> +
> +
> +static inline void
> +vring_desc_init_packed(struct virtqueue *vq, int n)
> +{
> + int i;
> + for (i = 0; i < n - 1; i++) {
> + vq->ring_packed.desc_packed[i].id = i;
> + vq->vq_descx[i].next = i + 1;
> + }
> + vq->ring_packed.desc_packed[i].id = i;
> + vq->vq_descx[i].next = VQ_RING_DESC_CHAIN_END;
> +}
> +
> /* Chain all the descriptors in the ring with an END */
> static inline void
> vring_desc_init(struct vring_desc *dp, uint16_t n)
> @@ -262,13 +300,59 @@ vring_desc_init(struct vring_desc *dp, uint16_t n)
> dp[i].next = VQ_RING_DESC_CHAIN_END;
> }
>
> +/**
> + * Tell the backend not to interrupt us.
> + */
> +static inline void
> +virtqueue_disable_intr_packed(struct virtqueue *vq)
> +{
> + uint16_t *event_flags = &vq->ring_packed.driver_event->desc_event_flags;
> +
> + if (*event_flags != RING_EVENT_FLAGS_DISABLE)
> + *event_flags = RING_EVENT_FLAGS_DISABLE;
Why not just set the value unconditionally?
IIUC, it is not called in the hot path?
> +}
> +
> +
> /**
> * Tell the backend not to interrupt us.
> */
> static inline void
> virtqueue_disable_intr(struct virtqueue *vq)
> {
> - vq->vq_ring.avail->flags |= VRING_AVAIL_F_NO_INTERRUPT;
> + if (vtpci_packed_queue(vq->hw))
> + virtqueue_disable_intr_packed(vq);
> + else
> + vq->vq_ring.avail->flags |= VRING_AVAIL_F_NO_INTERRUPT;
> +}
> +
> +/**
> + * Tell the backend to interrupt. Implementation for packed virtqueues.
> + */
> +static inline void
> +virtqueue_enable_intr_packed(struct virtqueue *vq)
> +{
> + uint16_t *off_wrap = &vq->ring_packed.driver_event->desc_event_off_wrap;
> + uint16_t *event_flags = &vq->ring_packed.driver_event->desc_event_flags;
> +
> + *off_wrap = vq->vq_used_cons_idx |
> + ((uint16_t)(vq->used_wrap_counter << 15));
Don't we want to set off_wrap only if VIRTIO_RING_F_EVENT_IDX is
negotiated?
> +
> + if (vq->event_flags_shadow == RING_EVENT_FLAGS_DISABLE) {
> + virtio_wmb();
> + vq->event_flags_shadow =
> + vtpci_with_feature(vq->hw, VIRTIO_RING_F_EVENT_IDX) ?
> + RING_EVENT_FLAGS_DESC : RING_EVENT_FLAGS_ENABLE;
> + *event_flags = vq->event_flags_shadow;
> + }
> +}
> +
> +/**
> + * Tell the backend to interrupt. Implementation for split virtqueues.
> + */
> +static inline void
> +virtqueue_enable_intr_split(struct virtqueue *vq)
> +{
> + vq->vq_ring.avail->flags &= (~VRING_AVAIL_F_NO_INTERRUPT);
> }
>
> /**
> @@ -277,7 +361,10 @@ virtqueue_disable_intr(struct virtqueue *vq)
> static inline void
> virtqueue_enable_intr(struct virtqueue *vq)
> {
> - vq->vq_ring.avail->flags &= (~VRING_AVAIL_F_NO_INTERRUPT);
> + if (vtpci_packed_queue(vq->hw))
> + virtqueue_enable_intr_packed(vq);
> + else
> + virtqueue_enable_intr_split(vq);
> }
>
> /**
>
* Re: [dpdk-dev] [PATCH v13 03/10] net/virtio: vring init for packed queues
2018-12-14 15:59 ` [dpdk-dev] [PATCH v13 03/10] net/virtio: vring init for packed queues Jens Freimann
@ 2018-12-17 16:15 ` Maxime Coquelin
0 siblings, 0 replies; 22+ messages in thread
From: Maxime Coquelin @ 2018-12-17 16:15 UTC (permalink / raw)
To: Jens Freimann, dev; +Cc: tiwei.bie, Gavin.Hu
On 12/14/18 4:59 PM, Jens Freimann wrote:
> Add and initialize descriptor data structures.
>
> Signed-off-by: Jens Freimann <jfreimann@redhat.com>
> Signed-off-by: Tiwei Bie <tiwei.bie@intel.com>
> ---
> drivers/net/virtio/virtio_ethdev.c | 32 ++++++++++++++++++----------
> drivers/net/virtio/virtio_ring.h | 34 ++++++++++++++++++++++++------
> drivers/net/virtio/virtqueue.h | 2 +-
> 3 files changed, 49 insertions(+), 19 deletions(-)
>
> diff --git a/drivers/net/virtio/virtio_ethdev.c b/drivers/net/virtio/virtio_ethdev.c
> index cb2b2e0bf..e6ba1282b 100644
> --- a/drivers/net/virtio/virtio_ethdev.c
> +++ b/drivers/net/virtio/virtio_ethdev.c
> @@ -299,20 +299,22 @@ virtio_init_vring(struct virtqueue *vq)
>
> PMD_INIT_FUNC_TRACE();
>
> - /*
> - * Reinitialise since virtio port might have been stopped and restarted
> - */
> memset(ring_mem, 0, vq->vq_ring_size);
> - vring_init(vr, size, ring_mem, VIRTIO_PCI_VRING_ALIGN);
> +
> vq->vq_used_cons_idx = 0;
> vq->vq_desc_head_idx = 0;
> vq->vq_avail_idx = 0;
> vq->vq_desc_tail_idx = (uint16_t)(vq->vq_nentries - 1);
> vq->vq_free_cnt = vq->vq_nentries;
> memset(vq->vq_descx, 0, sizeof(struct vq_desc_extra) * vq->vq_nentries);
> -
> - vring_desc_init(vr->desc, size);
> -
> + if (vtpci_packed_queue(vq->hw)) {
> + vring_init_packed(&vq->ring_packed, ring_mem,
> + VIRTIO_PCI_VRING_ALIGN, size);
> + vring_desc_init_packed(vq, size);
> + } else {
> + vring_init_split(vr, ring_mem, VIRTIO_PCI_VRING_ALIGN, size);
> + vring_desc_init_split(vr->desc, size);
> + }
> /*
> * Disable device(host) interrupting guest
> */
> @@ -384,11 +386,16 @@ virtio_init_queue(struct rte_eth_dev *dev, uint16_t vtpci_queue_idx)
> vq->hw = hw;
> vq->vq_queue_index = vtpci_queue_idx;
> vq->vq_nentries = vq_size;
> + vq->event_flags_shadow = 0;
> + if (vtpci_packed_queue(hw)) {
> + vq->avail_wrap_counter = 1;
> + vq->used_wrap_counter = 1;
> + }
>
> /*
> * Reserve a memzone for vring elements
> */
> - size = vring_size(vq_size, VIRTIO_PCI_VRING_ALIGN);
> + size = vring_size(hw, vq_size, VIRTIO_PCI_VRING_ALIGN);
> vq->vq_ring_size = RTE_ALIGN_CEIL(size, VIRTIO_PCI_VRING_ALIGN);
> PMD_INIT_LOG(DEBUG, "vring_size: %d, rounded_vring_size: %d",
> size, vq->vq_ring_size);
> @@ -491,7 +498,8 @@ virtio_init_queue(struct rte_eth_dev *dev, uint16_t vtpci_queue_idx)
> for (i = 0; i < vq_size; i++) {
> struct vring_desc *start_dp = txr[i].tx_indir;
>
> - vring_desc_init(start_dp, RTE_DIM(txr[i].tx_indir));
> + vring_desc_init_split(start_dp,
> + RTE_DIM(txr[i].tx_indir));
>
> /* first indirect descriptor is always the tx header */
> start_dp->addr = txvq->virtio_net_hdr_mem
> @@ -1488,7 +1496,8 @@ virtio_init_device(struct rte_eth_dev *eth_dev, uint64_t req_features)
>
> /* Setting up rx_header size for the device */
> if (vtpci_with_feature(hw, VIRTIO_NET_F_MRG_RXBUF) ||
> - vtpci_with_feature(hw, VIRTIO_F_VERSION_1))
> + vtpci_with_feature(hw, VIRTIO_F_VERSION_1) ||
> + vtpci_with_feature(hw, VIRTIO_F_RING_PACKED))
> hw->vtnet_hdr_size = sizeof(struct virtio_net_hdr_mrg_rxbuf);
> else
> hw->vtnet_hdr_size = sizeof(struct virtio_net_hdr);
> @@ -1908,7 +1917,8 @@ virtio_dev_configure(struct rte_eth_dev *dev)
>
> if (vtpci_with_feature(hw, VIRTIO_F_IN_ORDER)) {
> hw->use_inorder_tx = 1;
> - if (vtpci_with_feature(hw, VIRTIO_NET_F_MRG_RXBUF)) {
> + if (vtpci_with_feature(hw, VIRTIO_NET_F_MRG_RXBUF) &&
> + !vtpci_packed_queue(hw)) {
> hw->use_inorder_rx = 1;
> hw->use_simple_rx = 0;
> } else {
> diff --git a/drivers/net/virtio/virtio_ring.h b/drivers/net/virtio/virtio_ring.h
> index 464449074..9ab007c11 100644
> --- a/drivers/net/virtio/virtio_ring.h
> +++ b/drivers/net/virtio/virtio_ring.h
> @@ -86,7 +86,7 @@ struct vring_packed {
>
> struct vring {
> unsigned int num;
> - struct vring_desc *desc;
> + struct vring_desc *desc;
Unrelated change, please revert.
> struct vring_avail *avail;
> struct vring_used *used;
> };
> @@ -125,10 +125,18 @@ struct vring {
> #define vring_avail_event(vr) (*(uint16_t *)&(vr)->used->ring[(vr)->num])
>
> static inline size_t
> -vring_size(unsigned int num, unsigned long align)
> +vring_size(struct virtio_hw *hw, unsigned int num, unsigned long align)
> {
> size_t size;
>
> + if (vtpci_packed_queue(hw)) {
> + size = num * sizeof(struct vring_packed_desc);
> + size += sizeof(struct vring_packed_desc_event);
> + size = RTE_ALIGN_CEIL(size, align);
> + size += sizeof(struct vring_packed_desc_event);
> + return size;
> + }
> +
> size = num * sizeof(struct vring_desc);
> size += sizeof(struct vring_avail) + (num * sizeof(uint16_t));
> size = RTE_ALIGN_CEIL(size, align);
> @@ -136,17 +144,29 @@ vring_size(unsigned int num, unsigned long align)
> (num * sizeof(struct vring_used_elem));
> return size;
> }
> -
> static inline void
> -vring_init(struct vring *vr, unsigned int num, uint8_t *p,
> - unsigned long align)
> +vring_init_split(struct vring *vr, uint8_t *p, unsigned long align,
> + unsigned int num)
> {
> vr->num = num;
> vr->desc = (struct vring_desc *) p;
> vr->avail = (struct vring_avail *) (p +
> - num * sizeof(struct vring_desc));
> + vr->num * sizeof(struct vring_desc));
> vr->used = (void *)
> - RTE_ALIGN_CEIL((uintptr_t)(&vr->avail->ring[num]), align);
> + RTE_ALIGN_CEIL((uintptr_t)(&vr->avail->ring[vr->num]), align);
> +}
Not a big deal, but it is unrelated to the purpose of this patch.
> +
> +static inline void
> +vring_init_packed(struct vring_packed *vr, uint8_t *p, unsigned long align,
> + unsigned int num)
> +{
> + vr->num = num;
> + vr->desc_packed = (struct vring_packed_desc *)p;
> + vr->driver_event = (struct vring_packed_desc_event *)(p +
> + vr->num * sizeof(struct vring_packed_desc));
> + vr->device_event = (struct vring_packed_desc_event *)
> + RTE_ALIGN_CEIL((uintptr_t)(vr->driver_event +
> + sizeof(struct vring_packed_desc_event)), align);
> }
>
> /*
> diff --git a/drivers/net/virtio/virtqueue.h b/drivers/net/virtio/virtqueue.h
> index 19fabae0b..809d15879 100644
> --- a/drivers/net/virtio/virtqueue.h
> +++ b/drivers/net/virtio/virtqueue.h
> @@ -291,7 +291,7 @@ vring_desc_init_packed(struct virtqueue *vq, int n)
>
> /* Chain all the descriptors in the ring with an END */
> static inline void
> -vring_desc_init(struct vring_desc *dp, uint16_t n)
> +vring_desc_init_split(struct vring_desc *dp, uint16_t n)
> {
> uint16_t i;
>
>
With above minor comments fixed:
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Thanks,
Maxime
* Re: [dpdk-dev] [PATCH v13 02/10] net/virtio: add packed virtqueue helpers
2018-12-14 15:59 ` [dpdk-dev] [PATCH v13 02/10] net/virtio: add packed virtqueue helpers Jens Freimann
2018-12-17 16:09 ` Maxime Coquelin
@ 2018-12-17 16:30 ` Maxime Coquelin
1 sibling, 0 replies; 22+ messages in thread
From: Maxime Coquelin @ 2018-12-17 16:30 UTC (permalink / raw)
To: Jens Freimann, dev; +Cc: tiwei.bie, Gavin.Hu
On 12/14/18 4:59 PM, Jens Freimann wrote:
> +static inline void
> +_set_desc_avail(struct vring_packed_desc *desc, int wrap_counter)
> +{
> + desc->flags |= VRING_DESC_F_AVAIL(wrap_counter) |
> + VRING_DESC_F_USED(!wrap_counter);
> +}
> +
> +static inline void
> +set_desc_avail(struct virtqueue *vq, struct vring_packed_desc *desc)
> +{
> + _set_desc_avail(desc, vq->avail_wrap_counter);
> +}
> +
I wonder whether these helpers are really needed, as they are no
longer used in virtio_rxtx.c.
Only _set_desc_avail() is used for the ctrl vq.
* Re: [dpdk-dev] [PATCH v13 05/10] net/virtio: implement transmit path for packed queues
2018-12-14 15:59 ` [dpdk-dev] [PATCH v13 05/10] net/virtio: implement transmit path for packed queues Jens Freimann
@ 2018-12-17 16:35 ` Maxime Coquelin
0 siblings, 0 replies; 22+ messages in thread
From: Maxime Coquelin @ 2018-12-17 16:35 UTC (permalink / raw)
To: Jens Freimann, dev; +Cc: tiwei.bie, Gavin.Hu
On 12/14/18 4:59 PM, Jens Freimann wrote:
> This implements the transmit path for devices with
> support for packed virtqueues.
>
> Signed-off-by: Jens Freimann <jfreimann@redhat.com>
> Signed-off-by: Tiwei Bie <tiwei.bie@intel.com>
> ---
> drivers/net/virtio/virtio_ethdev.c | 57 ++++---
> drivers/net/virtio/virtio_ethdev.h | 2 +
> drivers/net/virtio/virtio_rxtx.c | 236 ++++++++++++++++++++++++++++-
> drivers/net/virtio/virtqueue.h | 20 ++-
> 4 files changed, 293 insertions(+), 22 deletions(-)
>
> diff --git a/drivers/net/virtio/virtio_ethdev.c b/drivers/net/virtio/virtio_ethdev.c
> index e6ba1282b..9f1b72e56 100644
> --- a/drivers/net/virtio/virtio_ethdev.c
> +++ b/drivers/net/virtio/virtio_ethdev.c
> @@ -390,6 +390,9 @@ virtio_init_queue(struct rte_eth_dev *dev, uint16_t vtpci_queue_idx)
> if (vtpci_packed_queue(hw)) {
> vq->avail_wrap_counter = 1;
> vq->used_wrap_counter = 1;
> + vq->avail_used_flags =
> + VRING_DESC_F_AVAIL(vq->avail_wrap_counter) |
> + VRING_DESC_F_USED(!vq->avail_wrap_counter);
> }
>
> /*
> @@ -497,17 +500,26 @@ virtio_init_queue(struct rte_eth_dev *dev, uint16_t vtpci_queue_idx)
> memset(txr, 0, vq_size * sizeof(*txr));
> for (i = 0; i < vq_size; i++) {
> struct vring_desc *start_dp = txr[i].tx_indir;
> -
> - vring_desc_init_split(start_dp,
> - RTE_DIM(txr[i].tx_indir));
> + struct vring_packed_desc *start_dp_packed =
> + txr[i].tx_indir_pq;
>
> /* first indirect descriptor is always the tx header */
> - start_dp->addr = txvq->virtio_net_hdr_mem
> - + i * sizeof(*txr)
> - + offsetof(struct virtio_tx_region, tx_hdr);
> -
> - start_dp->len = hw->vtnet_hdr_size;
> - start_dp->flags = VRING_DESC_F_NEXT;
> + if (vtpci_packed_queue(hw)) {
> + start_dp_packed->addr = txvq->virtio_net_hdr_mem
> + + i * sizeof(*txr)
> + + offsetof(struct virtio_tx_region,
> + tx_hdr);
> + start_dp_packed->len = hw->vtnet_hdr_size;
> + } else {
> + vring_desc_init_split(start_dp,
> + RTE_DIM(txr[i].tx_indir));
> + start_dp->addr = txvq->virtio_net_hdr_mem
> + + i * sizeof(*txr)
> + + offsetof(struct virtio_tx_region,
> + tx_hdr);
> + start_dp->len = hw->vtnet_hdr_size;
> + start_dp->flags = VRING_DESC_F_NEXT;
> + }
> }
> }
>
> @@ -1336,6 +1348,23 @@ set_rxtx_funcs(struct rte_eth_dev *eth_dev)
> {
> struct virtio_hw *hw = eth_dev->data->dev_private;
>
> + if (vtpci_packed_queue(hw)) {
> + PMD_INIT_LOG(INFO,
> + "virtio: using packed ring standard Tx path on port %u",
> + eth_dev->data->port_id);
> + eth_dev->tx_pkt_burst = virtio_xmit_pkts_packed;
> + } else {
> + if (hw->use_inorder_tx) {
> + PMD_INIT_LOG(INFO, "virtio: using inorder Tx path on port %u",
> + eth_dev->data->port_id);
> + eth_dev->tx_pkt_burst = virtio_xmit_pkts_inorder;
> + } else {
> + PMD_INIT_LOG(INFO, "virtio: using standard Tx path on port %u",
> + eth_dev->data->port_id);
> + eth_dev->tx_pkt_burst = virtio_xmit_pkts;
> + }
> + }
> +
> if (hw->use_simple_rx) {
> PMD_INIT_LOG(INFO, "virtio: using simple Rx path on port %u",
> eth_dev->data->port_id);
> @@ -1356,15 +1385,7 @@ set_rxtx_funcs(struct rte_eth_dev *eth_dev)
> eth_dev->rx_pkt_burst = &virtio_recv_pkts;
> }
>
> - if (hw->use_inorder_tx) {
> - PMD_INIT_LOG(INFO, "virtio: using inorder Tx path on port %u",
> - eth_dev->data->port_id);
> - eth_dev->tx_pkt_burst = virtio_xmit_pkts_inorder;
> - } else {
> - PMD_INIT_LOG(INFO, "virtio: using standard Tx path on port %u",
> - eth_dev->data->port_id);
> - eth_dev->tx_pkt_burst = virtio_xmit_pkts;
> - }
> +
Trailing newline?
Apart from that, it looks good to me:
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Thanks,
Maxime
* Re: [dpdk-dev] [PATCH v13 06/10] net/virtio: implement receive path for packed queues
2018-12-14 15:59 ` [dpdk-dev] [PATCH v13 06/10] net/virtio: implement receive " Jens Freimann
@ 2018-12-17 16:46 ` Maxime Coquelin
0 siblings, 0 replies; 22+ messages in thread
From: Maxime Coquelin @ 2018-12-17 16:46 UTC (permalink / raw)
To: Jens Freimann, dev; +Cc: tiwei.bie, Gavin.Hu
On 12/14/18 4:59 PM, Jens Freimann wrote:
> Implement the receive part.
>
> Signed-off-by: Jens Freimann <jfreimann@redhat.com>
> Signed-off-by: Tiwei Bie <tiwei.bie@intel.com>
> ---
> drivers/net/virtio/virtio_ethdev.c | 58 +++--
> drivers/net/virtio/virtio_ethdev.h | 5 +
> drivers/net/virtio/virtio_rxtx.c | 375 ++++++++++++++++++++++++++++-
> drivers/net/virtio/virtqueue.c | 43 +++-
> 4 files changed, 457 insertions(+), 24 deletions(-)
>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Thanks,
Maxime
* Re: [dpdk-dev] [PATCH v13 07/10] net/virtio: add virtio send command packed queue support
2018-12-14 15:59 ` [dpdk-dev] [PATCH v13 07/10] net/virtio: add virtio send command packed queue support Jens Freimann
@ 2018-12-17 16:48 ` Maxime Coquelin
0 siblings, 0 replies; 22+ messages in thread
From: Maxime Coquelin @ 2018-12-17 16:48 UTC (permalink / raw)
To: Jens Freimann, dev; +Cc: tiwei.bie, Gavin.Hu
On 12/14/18 4:59 PM, Jens Freimann wrote:
> Use packed virtqueue format when reading and writing descriptors
> to/from the ring.
>
> Signed-off-by: Jens Freimann <jfreimann@redhat.com>
> ---
> drivers/net/virtio/virtio_ethdev.c | 90 ++++++++++++++++++++++++++++++
> 1 file changed, 90 insertions(+)
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Thanks,
Maxime
* Re: [dpdk-dev] [PATCH v13 08/10] net/virtio-user: add option to use packed queues
2018-12-14 15:59 ` [dpdk-dev] [PATCH v13 08/10] net/virtio-user: add option to use packed queues Jens Freimann
@ 2018-12-17 16:49 ` Maxime Coquelin
0 siblings, 0 replies; 22+ messages in thread
From: Maxime Coquelin @ 2018-12-17 16:49 UTC (permalink / raw)
To: Jens Freimann, dev; +Cc: tiwei.bie, Gavin.Hu
On 12/14/18 4:59 PM, Jens Freimann wrote:
> From: Yuanhan Liu <yuanhan.liu@linux.intel.com>
>
> Add option to enable packed queue support for virtio-user
> devices.
>
> Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
> ---
> .../net/virtio/virtio_user/virtio_user_dev.c | 18 ++++++++++++++----
> .../net/virtio/virtio_user/virtio_user_dev.h | 2 +-
> drivers/net/virtio/virtio_user_ethdev.c | 14 +++++++++++++-
> 3 files changed, 28 insertions(+), 6 deletions(-)
>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Thanks,
Maxime
* Re: [dpdk-dev] [PATCH v13 09/10] net/virtio-user: fail if q used with packed vq
2018-12-14 15:59 ` [dpdk-dev] [PATCH v13 09/10] net/virtio-user: fail if q used with packed vq Jens Freimann
@ 2018-12-17 16:52 ` Maxime Coquelin
2018-12-17 19:27 ` Jens Freimann
0 siblings, 1 reply; 22+ messages in thread
From: Maxime Coquelin @ 2018-12-17 16:52 UTC (permalink / raw)
To: Jens Freimann, dev; +Cc: tiwei.bie, Gavin.Hu
I think you meant control queue in the title?
On 12/14/18 4:59 PM, Jens Freimann wrote:
>Until we have support for ctrl virtqueues, let's disable it and
>fail device initialization if specified as a parameter.
>
> Signed-off-by: Jens Freimann <jfreimann@redhat.com>
> ---
> drivers/net/virtio/virtio_user/virtio_user_dev.c | 10 ++++++++--
> 1 file changed, 8 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/net/virtio/virtio_user/virtio_user_dev.c b/drivers/net/virtio/virtio_user/virtio_user_dev.c
> index 697ba4ae8..14597eb73 100644
> --- a/drivers/net/virtio/virtio_user/virtio_user_dev.c
> +++ b/drivers/net/virtio/virtio_user/virtio_user_dev.c
> @@ -467,10 +467,16 @@ virtio_user_dev_init(struct virtio_user_dev *dev, char *path, int queues,
> if (!in_order)
> dev->unsupported_features |= (1ull << VIRTIO_F_IN_ORDER);
>
> - if (packed_vq)
> + if (packed_vq) {
> + if (cq) {
> + PMD_INIT_LOG(ERR, "control vq not supported with "
Maybe change to "not supported *yet*".
> + "packed virtqueues\n");
> + return -1;
> + }
> dev->device_features |= (1ull << VIRTIO_F_RING_PACKED);
> - else
> + } else {
> dev->device_features &= ~(1ull << VIRTIO_F_RING_PACKED);
> + }
>
> if (dev->mac_specified) {
> dev->device_features |= (1ull << VIRTIO_NET_F_MAC);
>
* Re: [dpdk-dev] [PATCH v13 10/10] net/virtio: enable packed virtqueues by default
2018-12-14 15:59 ` [dpdk-dev] [PATCH v13 10/10] net/virtio: enable packed virtqueues by default Jens Freimann
@ 2018-12-17 16:52 ` Maxime Coquelin
0 siblings, 0 replies; 22+ messages in thread
From: Maxime Coquelin @ 2018-12-17 16:52 UTC (permalink / raw)
To: Jens Freimann, dev; +Cc: tiwei.bie, Gavin.Hu
On 12/14/18 4:59 PM, Jens Freimann wrote:
> Signed-off-by: Jens Freimann <jfreimann@redhat.com>
> ---
> drivers/net/virtio/virtio_ethdev.h | 1 +
> drivers/net/virtio/virtio_user/virtio_user_dev.c | 3 ++-
> 2 files changed, 3 insertions(+), 1 deletion(-)
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Thanks,
Maxime
* Re: [dpdk-dev] [PATCH v13 09/10] net/virtio-user: fail if q used with packed vq
2018-12-17 16:52 ` Maxime Coquelin
@ 2018-12-17 19:27 ` Jens Freimann
0 siblings, 0 replies; 22+ messages in thread
From: Jens Freimann @ 2018-12-17 19:27 UTC (permalink / raw)
To: Maxime Coquelin; +Cc: dev, tiwei.bie, Gavin.Hu
On Mon, Dec 17, 2018 at 05:52:08PM +0100, Maxime Coquelin wrote:
>I think you meant control queue in the title?
>
>On 12/14/18 4:59 PM, Jens Freimann wrote:
>>Until we have support for ctrl virtqueues, let's disable it and
>>fail device initialization if specified as a parameter.
>>
>>Signed-off-by: Jens Freimann <jfreimann@redhat.com>
>>---
>> drivers/net/virtio/virtio_user/virtio_user_dev.c | 10 ++++++++--
>> 1 file changed, 8 insertions(+), 2 deletions(-)
>>
>>diff --git a/drivers/net/virtio/virtio_user/virtio_user_dev.c b/drivers/net/virtio/virtio_user/virtio_user_dev.c
>>index 697ba4ae8..14597eb73 100644
>>--- a/drivers/net/virtio/virtio_user/virtio_user_dev.c
>>+++ b/drivers/net/virtio/virtio_user/virtio_user_dev.c
>>@@ -467,10 +467,16 @@ virtio_user_dev_init(struct virtio_user_dev *dev, char *path, int queues,
>> if (!in_order)
>> dev->unsupported_features |= (1ull << VIRTIO_F_IN_ORDER);
>>- if (packed_vq)
>>+ if (packed_vq) {
>>+ if (cq) {
>>+ PMD_INIT_LOG(ERR, "control vq not supported with "
>
>Maybe change to "not supported *yet*".
sure, no problem.
regards,
Jens
end of thread, other threads:[~2018-12-17 19:27 UTC | newest]
Thread overview: 22+ messages
-- links below jump to the message on this page --
2018-12-14 15:59 [dpdk-dev] [PATCH v13 00/10] implement packed virtqueues Jens Freimann
2018-12-14 15:59 ` [dpdk-dev] [PATCH v13 01/10] net/virtio: add packed virtqueue defines Jens Freimann
2018-12-17 15:45 ` Maxime Coquelin
2018-12-14 15:59 ` [dpdk-dev] [PATCH v13 02/10] net/virtio: add packed virtqueue helpers Jens Freimann
2018-12-17 16:09 ` Maxime Coquelin
2018-12-17 16:30 ` Maxime Coquelin
2018-12-14 15:59 ` [dpdk-dev] [PATCH v13 03/10] net/virtio: vring init for packed queues Jens Freimann
2018-12-17 16:15 ` Maxime Coquelin
2018-12-14 15:59 ` [dpdk-dev] [PATCH v13 04/10] net/virtio: dump packed virtqueue data Jens Freimann
2018-12-14 15:59 ` [dpdk-dev] [PATCH v13 05/10] net/virtio: implement transmit path for packed queues Jens Freimann
2018-12-17 16:35 ` Maxime Coquelin
2018-12-14 15:59 ` [dpdk-dev] [PATCH v13 06/10] net/virtio: implement receive " Jens Freimann
2018-12-17 16:46 ` Maxime Coquelin
2018-12-14 15:59 ` [dpdk-dev] [PATCH v13 07/10] net/virtio: add virtio send command packed queue support Jens Freimann
2018-12-17 16:48 ` Maxime Coquelin
2018-12-14 15:59 ` [dpdk-dev] [PATCH v13 08/10] net/virtio-user: add option to use packed queues Jens Freimann
2018-12-17 16:49 ` Maxime Coquelin
2018-12-14 15:59 ` [dpdk-dev] [PATCH v13 09/10] net/virtio-user: fail if q used with packed vq Jens Freimann
2018-12-17 16:52 ` Maxime Coquelin
2018-12-17 19:27 ` Jens Freimann
2018-12-14 15:59 ` [dpdk-dev] [PATCH v13 10/10] net/virtio: enable packed virtqueues by default Jens Freimann
2018-12-17 16:52 ` Maxime Coquelin