* [dpdk-dev] [PATCH v7 0/8] implement packed virtqueues
@ 2018-10-03 13:11 Jens Freimann
2018-10-03 13:11 ` [dpdk-dev] [PATCH v7 1/8] net/virtio: vring init for packed queues Jens Freimann
` (9 more replies)
0 siblings, 10 replies; 22+ messages in thread
From: Jens Freimann @ 2018-10-03 13:11 UTC (permalink / raw)
To: dev; +Cc: tiwei.bie, maxime.coquelin, Gavin.Hu
I'm sending this out to get some comments, especially on the TX path code.
I added support for mergeable rx buffers, out-of-order processing and
indirect descriptors. The receive path works well, but the TX path sometimes
locks up after a random number of transmitted packets, so please review that
part extra carefully. This is also why I haven't added any new performance
numbers to the cover letter yet.
To support out-of-order processing I used the vq_desc_extra struct to
add a .next field and use it as a list for managing descriptors. This
seemed to add less complexity to the code than adding a new data
structure to use as a list for packed queue descriptors.
I also took out the patch for supporting virtio-user because it turned out
to be more complex than expected. I will try to get it working for the next
version, but if I don't, can we add it in a later patch set (and then
probably not in 18.11)?
This is a basic implementation of packed virtqueues as specified in the
Virtio 1.1 draft. A compiled version of the current draft is available
at https://github.com/oasis-tcs/virtio-docs.git (or as .pdf at
https://github.com/oasis-tcs/virtio-docs/blob/master/virtio-v1.1-packed-wd10.pdf).
A packed virtqueue differs from a split virtqueue in that it
consists of a single descriptor ring that replaces the available and
used rings as well as their indexes and descriptor pointers.
Each descriptor is readable and writable and has a flags field. These
flags mark whether a descriptor is available or used. To detect new
available descriptors even after the ring has wrapped, device and driver
each have a single-bit wrap counter that is flipped from 0 to 1 and vice
versa every time the last descriptor in the ring is used/made available.
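As a rough standalone sketch of the mechanism (the struct and flag macros
follow what patches 1-3 of this series add; the helper here is simplified
to take the wrap counter directly):

    #include <stdint.h>

    /* Packed descriptor layout from the Virtio 1.1 draft (patch 1). */
    struct vring_desc_packed {
            uint64_t addr;   /* buffer address */
            uint32_t len;    /* buffer length */
            uint16_t index;  /* buffer id, enables out-of-order completion */
            uint16_t flags;  /* incl. AVAIL (bit 7) and USED (bit 15) */
    };

    #define VRING_DESC_F_AVAIL(b) ((uint16_t)(b) << 7)
    #define VRING_DESC_F_USED(b)  ((uint16_t)(b) << 15)

    /* The driver publishes a descriptor with AVAIL == wrap counter and
     * USED == !wrap counter; the device completes it by making both bits
     * equal. A descriptor is therefore used when the two bits match each
     * other and the driver's used wrap counter. */
    static int
    desc_is_used(const struct vring_desc_packed *desc, int used_wrap_counter)
    {
            uint16_t avail = !!(desc->flags & VRING_DESC_F_AVAIL(1));
            uint16_t used  = !!(desc->flags & VRING_DESC_F_USED(1));

            return avail == used && used == used_wrap_counter;
    }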
Jens Freimann (8):
net/virtio: vring init for packed queues
net/virtio: add packed virtqueue defines
net/virtio: add packed virtqueue helpers
net/virtio: dump packed virtqueue data
net/virtio: implement transmit path for packed queues
net/virtio: implement receive path for packed queues
net/virtio: add virtio send command packed queue support
net/virtio: enable packed virtqueues by default
drivers/net/virtio/virtio_ethdev.c | 161 +++++++--
drivers/net/virtio/virtio_ethdev.h | 5 +
drivers/net/virtio/virtio_pci.h | 8 +
drivers/net/virtio/virtio_ring.h | 96 ++++-
drivers/net/virtio/virtio_rxtx.c | 544 ++++++++++++++++++++++++++++-
drivers/net/virtio/virtqueue.c | 23 ++
drivers/net/virtio/virtqueue.h | 52 ++-
7 files changed, 846 insertions(+), 43 deletions(-)
--
2.17.1
* [dpdk-dev] [PATCH v7 1/8] net/virtio: vring init for packed queues
2018-10-03 13:11 [dpdk-dev] [PATCH v7 0/8] implement packed virtqueues Jens Freimann
@ 2018-10-03 13:11 ` Jens Freimann
2018-10-04 11:54 ` Maxime Coquelin
2018-10-03 13:11 ` [dpdk-dev] [PATCH v7 2/8] net/virtio: add packed virtqueue defines Jens Freimann
` (8 subsequent siblings)
9 siblings, 1 reply; 22+ messages in thread
From: Jens Freimann @ 2018-10-03 13:11 UTC (permalink / raw)
To: dev; +Cc: tiwei.bie, maxime.coquelin, Gavin.Hu
Add and initialize descriptor data structures.
To allow out-of-order processing, a .next field was added to
struct vq_desc_extra because there is none in the packed virtqueue
descriptor itself. This is used to chain descriptors and process them
similar to how it is handled for split virtqueues.
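To make the free-list idea concrete, here is a standalone sketch of the
initialization done in virtio_init_vring() for packed queues (struct and
constant as in this series; the helper name is illustrative):

    #include <stdint.h>

    #define VQ_RING_DESC_CHAIN_END 32768

    /* Per-descriptor driver metadata; the .next field is what this patch
     * adds to chain descriptors outside the ring itself. */
    struct vq_desc_extra {
            void    *cookie;
            uint16_t ndescs;
            uint16_t next;
    };

    /* Chain all shadow entries into the initial free list, mirroring
     * what virtio_init_vring() does for packed queues. */
    static void
    desc_extra_init(struct vq_desc_extra *descx, uint16_t n)
    {
            uint16_t i;

            for (i = 0; i < n - 1; i++)
                    descx[i].next = i + 1;
            descx[n - 1].next = VQ_RING_DESC_CHAIN_END;
    }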
Signed-off-by: Jens Freimann <jfreimann@redhat.com>
---
drivers/net/virtio/virtio_ethdev.c | 28 +++++++++------
drivers/net/virtio/virtio_pci.h | 8 +++++
drivers/net/virtio/virtio_ring.h | 55 +++++++++++++++++++++++++++---
drivers/net/virtio/virtqueue.h | 13 ++++++-
4 files changed, 88 insertions(+), 16 deletions(-)
diff --git a/drivers/net/virtio/virtio_ethdev.c b/drivers/net/virtio/virtio_ethdev.c
index b81df0a99..d6a1613dd 100644
--- a/drivers/net/virtio/virtio_ethdev.c
+++ b/drivers/net/virtio/virtio_ethdev.c
@@ -299,19 +299,27 @@ virtio_init_vring(struct virtqueue *vq)
PMD_INIT_FUNC_TRACE();
- /*
- * Reinitialise since virtio port might have been stopped and restarted
- */
memset(ring_mem, 0, vq->vq_ring_size);
- vring_init(vr, size, ring_mem, VIRTIO_PCI_VRING_ALIGN);
+ vring_init(vq->hw, vr, size, ring_mem, VIRTIO_PCI_VRING_ALIGN);
+
+ vq->vq_free_cnt = vq->vq_nentries;
+ memset(vq->vq_descx, 0, sizeof(struct vq_desc_extra) * vq->vq_nentries);
vq->vq_used_cons_idx = 0;
+ vq->vq_avail_idx = 0;
vq->vq_desc_head_idx = 0;
- vq->vq_avail_idx = 0;
vq->vq_desc_tail_idx = (uint16_t)(vq->vq_nentries - 1);
- vq->vq_free_cnt = vq->vq_nentries;
- memset(vq->vq_descx, 0, sizeof(struct vq_desc_extra) * vq->vq_nentries);
+ if (vtpci_packed_queue(vq->hw)) {
+ uint16_t i;
+ for(i = 0; i < size - 1; i++) {
+ vq->vq_descx[i].next = i + 1;
+ vq->vq_ring.desc_packed[i].index = i;
+ }
+ vq->vq_ring.desc_packed[i].index = i;
+ vq->vq_descx[i].next = VQ_RING_DESC_CHAIN_END;
+ } else {
- vring_desc_init(vr->desc, size);
+ vring_desc_init_split(vr->desc, size);
+ }
/*
* Disable device(host) interrupting guest
@@ -386,7 +394,7 @@ virtio_init_queue(struct rte_eth_dev *dev, uint16_t vtpci_queue_idx)
/*
* Reserve a memzone for vring elements
*/
- size = vring_size(vq_size, VIRTIO_PCI_VRING_ALIGN);
+ size = vring_size(hw, vq_size, VIRTIO_PCI_VRING_ALIGN);
vq->vq_ring_size = RTE_ALIGN_CEIL(size, VIRTIO_PCI_VRING_ALIGN);
PMD_INIT_LOG(DEBUG, "vring_size: %d, rounded_vring_size: %d",
size, vq->vq_ring_size);
@@ -489,7 +497,7 @@ virtio_init_queue(struct rte_eth_dev *dev, uint16_t vtpci_queue_idx)
for (i = 0; i < vq_size; i++) {
struct vring_desc *start_dp = txr[i].tx_indir;
- vring_desc_init(start_dp, RTE_DIM(txr[i].tx_indir));
+ vring_desc_init_split(start_dp, RTE_DIM(txr[i].tx_indir));
/* first indirect descriptor is always the tx header */
start_dp->addr = txvq->virtio_net_hdr_mem
diff --git a/drivers/net/virtio/virtio_pci.h b/drivers/net/virtio/virtio_pci.h
index 58fdd3d45..90204d281 100644
--- a/drivers/net/virtio/virtio_pci.h
+++ b/drivers/net/virtio/virtio_pci.h
@@ -113,6 +113,8 @@ struct virtnet_ctl;
#define VIRTIO_F_VERSION_1 32
#define VIRTIO_F_IOMMU_PLATFORM 33
+#define VIRTIO_F_RING_PACKED 34
+#define VIRTIO_F_IN_ORDER 35
/*
* Some VirtIO feature bits (currently bits 28 through 31) are
@@ -314,6 +316,12 @@ vtpci_with_feature(struct virtio_hw *hw, uint64_t bit)
return (hw->guest_features & (1ULL << bit)) != 0;
}
+static inline int
+vtpci_packed_queue(struct virtio_hw *hw)
+{
+ return vtpci_with_feature(hw, VIRTIO_F_RING_PACKED);
+}
+
/*
* Function declaration from virtio_pci.c
*/
diff --git a/drivers/net/virtio/virtio_ring.h b/drivers/net/virtio/virtio_ring.h
index 9e3c2a015..309069fdb 100644
--- a/drivers/net/virtio/virtio_ring.h
+++ b/drivers/net/virtio/virtio_ring.h
@@ -54,11 +54,38 @@ struct vring_used {
struct vring_used_elem ring[0];
};
+/* For support of packed virtqueues in Virtio 1.1 the format of descriptors
+ * looks like this.
+ */
+struct vring_desc_packed {
+ uint64_t addr;
+ uint32_t len;
+ uint16_t index;
+ uint16_t flags;
+} __attribute__ ((aligned(16)));
+
+#define RING_EVENT_FLAGS_ENABLE 0x0
+#define RING_EVENT_FLAGS_DISABLE 0x1
+#define RING_EVENT_FLAGS_DESC 0x2
+struct vring_packed_desc_event {
+ uint16_t desc_event_off_wrap;
+ uint16_t desc_event_flags;
+};
+
struct vring {
unsigned int num;
- struct vring_desc *desc;
- struct vring_avail *avail;
- struct vring_used *used;
+ union {
+ struct vring_desc_packed *desc_packed;
+ struct vring_desc *desc;
+ };
+ union {
+ struct vring_avail *avail;
+ struct vring_packed_desc_event *driver_event;
+ };
+ union {
+ struct vring_used *used;
+ struct vring_packed_desc_event *device_event;
+ };
};
/* The standard layout for the ring is a continuous chunk of memory which
@@ -95,10 +122,18 @@ struct vring {
#define vring_avail_event(vr) (*(uint16_t *)&(vr)->used->ring[(vr)->num])
static inline size_t
-vring_size(unsigned int num, unsigned long align)
+vring_size(struct virtio_hw *hw, unsigned int num, unsigned long align)
{
size_t size;
+ if (vtpci_packed_queue(hw)) {
+ size = num * sizeof(struct vring_desc_packed);
+ size += sizeof(struct vring_packed_desc_event);
+ size = RTE_ALIGN_CEIL(size, align);
+ size += sizeof(struct vring_packed_desc_event);
+ return size;
+ }
+
size = num * sizeof(struct vring_desc);
size += sizeof(struct vring_avail) + (num * sizeof(uint16_t));
size = RTE_ALIGN_CEIL(size, align);
@@ -108,10 +143,20 @@ vring_size(unsigned int num, unsigned long align)
}
static inline void
-vring_init(struct vring *vr, unsigned int num, uint8_t *p,
+vring_init(struct virtio_hw *hw, struct vring *vr, unsigned int num, uint8_t *p,
unsigned long align)
{
vr->num = num;
+ if (vtpci_packed_queue(hw)) {
+ vr->desc_packed = (struct vring_desc_packed *)p;
+ vr->driver_event = (struct vring_packed_desc_event *)(p +
+ num * sizeof(struct vring_desc_packed));
+ vr->device_event = (struct vring_packed_desc_event *)
+ RTE_ALIGN_CEIL((uintptr_t)(vr->driver_event +
+ sizeof(struct vring_packed_desc_event)), align);
+ return;
+ }
+
vr->desc = (struct vring_desc *) p;
vr->avail = (struct vring_avail *) (p +
num * sizeof(struct vring_desc));
diff --git a/drivers/net/virtio/virtqueue.h b/drivers/net/virtio/virtqueue.h
index 26518ed98..6a4f92b79 100644
--- a/drivers/net/virtio/virtqueue.h
+++ b/drivers/net/virtio/virtqueue.h
@@ -161,6 +161,7 @@ struct virtio_pmd_ctrl {
struct vq_desc_extra {
void *cookie;
uint16_t ndescs;
+ uint16_t next;
};
struct virtqueue {
@@ -245,9 +246,19 @@ struct virtio_tx_region {
__attribute__((__aligned__(16)));
};
+static inline void
+vring_desc_init_packed(struct vring *vr, int n)
+{
+ int i;
+ for (i = 0; i < n; i++) {
+ struct vring_desc_packed *desc = &vr->desc_packed[i];
+ desc->index = i;
+ }
+}
+
/* Chain all the descriptors in the ring with an END */
static inline void
-vring_desc_init(struct vring_desc *dp, uint16_t n)
+vring_desc_init_split(struct vring_desc *dp, uint16_t n)
{
uint16_t i;
--
2.17.1
* [dpdk-dev] [PATCH v7 2/8] net/virtio: add packed virtqueue defines
2018-10-03 13:11 [dpdk-dev] [PATCH v7 0/8] implement packed virtqueues Jens Freimann
2018-10-03 13:11 ` [dpdk-dev] [PATCH v7 1/8] net/virtio: vring init for packed queues Jens Freimann
@ 2018-10-03 13:11 ` Jens Freimann
2018-10-04 11:54 ` Maxime Coquelin
2018-10-03 13:11 ` [dpdk-dev] [PATCH v7 3/8] net/virtio: add packed virtqueue helpers Jens Freimann
` (7 subsequent siblings)
9 siblings, 1 reply; 22+ messages in thread
From: Jens Freimann @ 2018-10-03 13:11 UTC (permalink / raw)
To: dev; +Cc: tiwei.bie, maxime.coquelin, Gavin.Hu
Signed-off-by: Jens Freimann <jfreimann@redhat.com>
---
drivers/net/virtio/virtio_ring.h | 4 ++++
1 file changed, 4 insertions(+)
diff --git a/drivers/net/virtio/virtio_ring.h b/drivers/net/virtio/virtio_ring.h
index 309069fdb..36a65f9b3 100644
--- a/drivers/net/virtio/virtio_ring.h
+++ b/drivers/net/virtio/virtio_ring.h
@@ -15,7 +15,11 @@
#define VRING_DESC_F_WRITE 2
/* This means the buffer contains a list of buffer descriptors. */
#define VRING_DESC_F_INDIRECT 4
+/* This flag means the descriptor was made available by the driver */
+#define VRING_DESC_F_AVAIL(b) ((uint16_t)(b) << 7)
+/* This flag means the descriptor was used by the device */
+#define VRING_DESC_F_USED(b) ((uint16_t)(b) << 15)
/* The Host uses this in used->flags to advise the Guest: don't kick me
* when you add a buffer. It's unreliable, so it's simply an
* optimization. Guest will still kick if it's out of buffers. */
--
2.17.1
* [dpdk-dev] [PATCH v7 3/8] net/virtio: add packed virtqueue helpers
2018-10-03 13:11 [dpdk-dev] [PATCH v7 0/8] implement packed virtqueues Jens Freimann
2018-10-03 13:11 ` [dpdk-dev] [PATCH v7 1/8] net/virtio: vring init for packed queues Jens Freimann
2018-10-03 13:11 ` [dpdk-dev] [PATCH v7 2/8] net/virtio: add packed virtqueue defines Jens Freimann
@ 2018-10-03 13:11 ` Jens Freimann
2018-10-03 13:11 ` [dpdk-dev] [PATCH v7 4/8] net/virtio: dump packed virtqueue data Jens Freimann
` (6 subsequent siblings)
9 siblings, 0 replies; 22+ messages in thread
From: Jens Freimann @ 2018-10-03 13:11 UTC (permalink / raw)
To: dev; +Cc: tiwei.bie, maxime.coquelin, Gavin.Hu
Add helper functions to set/clear and check descriptor flags.
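For context, a standalone usage sketch of the publish-side helpers
(reusing the struct and flag macros from the cover-letter sketch;
publish_one() is an illustrative name, not part of this series):

    /* Publish one descriptor and advance the avail index, flipping the
     * wrap counter on wrap-around (cf. set_desc_avail() and
     * update_pq_avail_index() below). */
    static void
    publish_one(struct vring_desc_packed *ring, uint16_t *avail_idx,
                int *avail_wrap_counter, uint16_t ring_size,
                uint64_t addr, uint32_t len)
    {
            struct vring_desc_packed *desc = &ring[*avail_idx];

            desc->addr = addr;
            desc->len = len;
            /* AVAIL = wrap counter, USED = its inverse: the bits differ,
             * so the device sees the slot as available but not used. */
            desc->flags |= VRING_DESC_F_AVAIL(*avail_wrap_counter) |
                           VRING_DESC_F_USED(!*avail_wrap_counter);

            if (++(*avail_idx) >= ring_size) {
                    *avail_idx = 0;
                    *avail_wrap_counter ^= 1;
            }
    }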
Signed-off-by: Jens Freimann <jfreimann@redhat.com>
---
drivers/net/virtio/virtio_ring.h | 26 ++++++++++++++++++++++++++
drivers/net/virtio/virtqueue.h | 11 +++++++++++
2 files changed, 37 insertions(+)
diff --git a/drivers/net/virtio/virtio_ring.h b/drivers/net/virtio/virtio_ring.h
index 36a65f9b3..b9e63d4d4 100644
--- a/drivers/net/virtio/virtio_ring.h
+++ b/drivers/net/virtio/virtio_ring.h
@@ -78,6 +78,8 @@ struct vring_packed_desc_event {
struct vring {
unsigned int num;
+ unsigned int avail_wrap_counter;
+ unsigned int used_wrap_counter;
union {
struct vring_desc_packed *desc_packed;
struct vring_desc *desc;
@@ -92,6 +94,30 @@ struct vring {
};
};
+static inline void
+_set_desc_avail(struct vring_desc_packed *desc, int wrap_counter)
+{
+ desc->flags |= VRING_DESC_F_AVAIL(wrap_counter) |
+ VRING_DESC_F_USED(!wrap_counter);
+}
+
+static inline void
+set_desc_avail(struct vring *vr, struct vring_desc_packed *desc)
+{
+ _set_desc_avail(desc, vr->avail_wrap_counter);
+}
+
+static inline int
+desc_is_used(struct vring_desc_packed *desc, struct vring *vr)
+{
+ uint16_t used, avail;
+
+ used = !!(desc->flags & VRING_DESC_F_USED(1));
+ avail = !!(desc->flags & VRING_DESC_F_AVAIL(1));
+
+ return used == avail && used == vr->used_wrap_counter;
+}
+
/* The standard layout for the ring is a continuous chunk of memory which
* looks like this. We assume num is a power of 2.
*
diff --git a/drivers/net/virtio/virtqueue.h b/drivers/net/virtio/virtqueue.h
index 6a4f92b79..b55ace958 100644
--- a/drivers/net/virtio/virtqueue.h
+++ b/drivers/net/virtio/virtqueue.h
@@ -246,6 +246,17 @@ struct virtio_tx_region {
__attribute__((__aligned__(16)));
};
+static inline uint16_t
+update_pq_avail_index(struct virtqueue *vq)
+{
+ if (++vq->vq_avail_idx >= vq->vq_nentries) {
+ vq->vq_avail_idx = 0;
+ vq->vq_ring.avail_wrap_counter ^= 1;
+ }
+
+ return vq->vq_avail_idx;
+}
+
static inline void
vring_desc_init_packed(struct vring *vr, int n)
{
--
2.17.1
* [dpdk-dev] [PATCH v7 4/8] net/virtio: dump packed virtqueue data
2018-10-03 13:11 [dpdk-dev] [PATCH v7 0/8] implement packed virtqueues Jens Freimann
` (2 preceding siblings ...)
2018-10-03 13:11 ` [dpdk-dev] [PATCH v7 3/8] net/virtio: add packed virtqueue helpers Jens Freimann
@ 2018-10-03 13:11 ` Jens Freimann
2018-10-04 13:23 ` Maxime Coquelin
2018-10-03 13:11 ` [dpdk-dev] [PATCH v7 5/8] net/virtio: implement transmit path for packed queues Jens Freimann
` (5 subsequent siblings)
9 siblings, 1 reply; 22+ messages in thread
From: Jens Freimann @ 2018-10-03 13:11 UTC (permalink / raw)
To: dev; +Cc: tiwei.bie, maxime.coquelin, Gavin.Hu
Add support for dumping packed virtqueue data in the
VIRTQUEUE_DUMP() macro.
Signed-off-by: Jens Freimann <jfreimann@redhat.com>
---
drivers/net/virtio/virtqueue.h | 9 +++++++++
1 file changed, 9 insertions(+)
diff --git a/drivers/net/virtio/virtqueue.h b/drivers/net/virtio/virtqueue.h
index b55ace958..549962c28 100644
--- a/drivers/net/virtio/virtqueue.h
+++ b/drivers/net/virtio/virtqueue.h
@@ -377,6 +377,15 @@ virtqueue_notify(struct virtqueue *vq)
uint16_t used_idx, nused; \
used_idx = (vq)->vq_ring.used->idx; \
nused = (uint16_t)(used_idx - (vq)->vq_used_cons_idx); \
+ if (vtpci_packed_queue((vq)->hw)) { \
+ PMD_INIT_LOG(DEBUG, \
+ "VQ: - size=%d; free=%d; used_cons_idx=%d; avail_idx=%d", \
+ "VQ: - avail_wrap_counter=%d; used_wrap_counter=%d", \
+ (vq)->vq_nentries, (vq)->vq_free_cnt, (vq)->vq_used_cons_idx, \
+ (vq)->vq_avail_idx, (vq)->vq_ring.avail_wrap_counter, \
+ (vq)->vq_ring.used_wrap_counter); \
+ break; \
+ } \
PMD_INIT_LOG(DEBUG, \
"VQ: - size=%d; free=%d; used=%d; desc_head_idx=%d;" \
" avail.idx=%d; used_cons_idx=%d; used.idx=%d;" \
--
2.17.1
* [dpdk-dev] [PATCH v7 5/8] net/virtio: implement transmit path for packed queues
2018-10-03 13:11 [dpdk-dev] [PATCH v7 0/8] implement packed virtqueues Jens Freimann
` (3 preceding siblings ...)
2018-10-03 13:11 ` [dpdk-dev] [PATCH v7 4/8] net/virtio: dump packed virtqueue data Jens Freimann
@ 2018-10-03 13:11 ` Jens Freimann
2018-10-10 7:27 ` Maxime Coquelin
2018-10-11 17:31 ` Maxime Coquelin
2018-10-03 13:11 ` [dpdk-dev] [PATCH v7 6/8] net/virtio: implement receive " Jens Freimann
` (4 subsequent siblings)
9 siblings, 2 replies; 22+ messages in thread
From: Jens Freimann @ 2018-10-03 13:11 UTC (permalink / raw)
To: dev; +Cc: tiwei.bie, maxime.coquelin, Gavin.Hu
This implements the transmit path for devices that support
packed virtqueues.
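The crucial ordering in virtqueue_enqueue_xmit_packed() below is that the
head descriptor's AVAIL/USED flags are written only after the rest of the
chain, separated by a write barrier, so the device never observes a
half-built chain. A condensed, illustrative sketch (publish_tx_chain() is
not a function in this series; the real code also handles indirect
descriptors, header prepending, and flipping the wrap counter when a
chain crosses the ring end):

    /* Condensed publication order for an n-descriptor TX chain. */
    static void
    publish_tx_chain(struct vring_desc_packed *ring, const uint16_t *ids,
                     const uint64_t *addr, const uint32_t *len,
                     uint16_t n, int wrap_counter)
    {
            uint16_t avail_used = VRING_DESC_F_AVAIL(wrap_counter) |
                                  VRING_DESC_F_USED(!wrap_counter);
            uint16_t i;

            /* 1. Fill all descriptors, including their flags, except
             *    the flags of the head descriptor ids[0]. */
            for (i = 1; i < n; i++) {
                    ring[ids[i]].addr  = addr[i];
                    ring[ids[i]].len   = len[i];
                    ring[ids[i]].flags = avail_used |
                            (i + 1 < n ? VRING_DESC_F_NEXT : 0);
            }
            ring[ids[0]].addr = addr[0];
            ring[ids[0]].len  = len[0];

            /* 2. Barrier, then flip the head flags: this hands the
             *    whole chain to the device in one shot. */
            rte_smp_wmb();
            ring[ids[0]].flags = avail_used | VRING_DESC_F_NEXT;
    }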
Signed-off-by: Jens Freimann <jfreimann@redhat.com>
---
drivers/net/virtio/virtio_ethdev.c | 32 ++--
drivers/net/virtio/virtio_ethdev.h | 2 +
drivers/net/virtio/virtio_ring.h | 15 +-
drivers/net/virtio/virtio_rxtx.c | 276 +++++++++++++++++++++++++++++
drivers/net/virtio/virtqueue.h | 18 +-
5 files changed, 329 insertions(+), 14 deletions(-)
diff --git a/drivers/net/virtio/virtio_ethdev.c b/drivers/net/virtio/virtio_ethdev.c
index d6a1613dd..c65ac365c 100644
--- a/drivers/net/virtio/virtio_ethdev.c
+++ b/drivers/net/virtio/virtio_ethdev.c
@@ -390,6 +390,8 @@ virtio_init_queue(struct rte_eth_dev *dev, uint16_t vtpci_queue_idx)
vq->hw = hw;
vq->vq_queue_index = vtpci_queue_idx;
vq->vq_nentries = vq_size;
+ if (vtpci_packed_queue(hw))
+ vq->vq_ring.avail_wrap_counter = 1;
/*
* Reserve a memzone for vring elements
@@ -496,16 +498,22 @@ virtio_init_queue(struct rte_eth_dev *dev, uint16_t vtpci_queue_idx)
memset(txr, 0, vq_size * sizeof(*txr));
for (i = 0; i < vq_size; i++) {
struct vring_desc *start_dp = txr[i].tx_indir;
-
- vring_desc_init_split(start_dp, RTE_DIM(txr[i].tx_indir));
-
+ struct vring_desc_packed*start_dp_packed = txr[i].tx_indir_pq;
+
/* first indirect descriptor is always the tx header */
- start_dp->addr = txvq->virtio_net_hdr_mem
- + i * sizeof(*txr)
- + offsetof(struct virtio_tx_region, tx_hdr);
-
- start_dp->len = hw->vtnet_hdr_size;
- start_dp->flags = VRING_DESC_F_NEXT;
+ if (vtpci_packed_queue(hw)) {
+ start_dp_packed->addr = txvq->virtio_net_hdr_mem
+ + i * sizeof(*txr)
+ + offsetof(struct virtio_tx_region, tx_hdr);
+ start_dp_packed->len = hw->vtnet_hdr_size;
+ } else {
+ vring_desc_init_split(start_dp, RTE_DIM(txr[i].tx_indir));
+ start_dp->addr = txvq->virtio_net_hdr_mem
+ + i * sizeof(*txr)
+ + offsetof(struct virtio_tx_region, tx_hdr);
+ start_dp->len = hw->vtnet_hdr_size;
+ start_dp->flags = VRING_DESC_F_NEXT;
+ }
}
}
@@ -1344,7 +1352,11 @@ set_rxtx_funcs(struct rte_eth_dev *eth_dev)
eth_dev->rx_pkt_burst = &virtio_recv_pkts;
}
- if (hw->use_inorder_tx) {
+ if (vtpci_packed_queue(hw)) {
+ PMD_INIT_LOG(INFO, "virtio: using virtio 1.1 Tx path on port %u",
+ eth_dev->data->port_id);
+ eth_dev->tx_pkt_burst = virtio_xmit_pkts_packed;
+ } else if (hw->use_inorder_tx) {
PMD_INIT_LOG(INFO, "virtio: using inorder Tx path on port %u",
eth_dev->data->port_id);
eth_dev->tx_pkt_burst = virtio_xmit_pkts_inorder;
diff --git a/drivers/net/virtio/virtio_ethdev.h b/drivers/net/virtio/virtio_ethdev.h
index e0f80e5a4..05d355180 100644
--- a/drivers/net/virtio/virtio_ethdev.h
+++ b/drivers/net/virtio/virtio_ethdev.h
@@ -82,6 +82,8 @@ uint16_t virtio_recv_mergeable_pkts_inorder(void *rx_queue,
uint16_t virtio_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts);
+uint16_t virtio_xmit_pkts_packed(void *tx_queue, struct rte_mbuf **tx_pkts,
+ uint16_t nb_pkts);
uint16_t virtio_xmit_pkts_inorder(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts);
diff --git a/drivers/net/virtio/virtio_ring.h b/drivers/net/virtio/virtio_ring.h
index b9e63d4d4..dbffd4dcd 100644
--- a/drivers/net/virtio/virtio_ring.h
+++ b/drivers/net/virtio/virtio_ring.h
@@ -108,14 +108,25 @@ set_desc_avail(struct vring *vr, struct vring_desc_packed *desc)
}
static inline int
-desc_is_used(struct vring_desc_packed *desc, struct vring *vr)
+_desc_is_used(struct vring_desc_packed *desc)
{
uint16_t used, avail;
used = !!(desc->flags & VRING_DESC_F_USED(1));
avail = !!(desc->flags & VRING_DESC_F_AVAIL(1));
- return used == avail && used == vr->used_wrap_counter;
+ return used == avail;
+
+}
+
+static inline int
+desc_is_used(struct vring_desc_packed *desc, struct vring *vr)
+{
+ uint16_t used;
+
+ used = !!(desc->flags & VRING_DESC_F_USED(1));
+
+ return _desc_is_used(desc) && used == vr->used_wrap_counter;
}
/* The standard layout for the ring is a continuous chunk of memory which
diff --git a/drivers/net/virtio/virtio_rxtx.c b/drivers/net/virtio/virtio_rxtx.c
index eb891433e..4078fba8e 100644
--- a/drivers/net/virtio/virtio_rxtx.c
+++ b/drivers/net/virtio/virtio_rxtx.c
@@ -38,6 +38,7 @@
#define VIRTIO_DUMP_PACKET(m, len) do { } while (0)
#endif
+
int
virtio_dev_rx_queue_done(void *rxq, uint16_t offset)
{
@@ -88,6 +89,41 @@ vq_ring_free_chain(struct virtqueue *vq, uint16_t desc_idx)
dp->next = VQ_RING_DESC_CHAIN_END;
}
+void
+vq_ring_free_chain_packed(struct virtqueue *vq, uint16_t desc_idx)
+{
+ struct vring_desc_packed *dp;
+ struct vq_desc_extra *dxp = NULL, *dxp_tail = NULL;
+ uint16_t desc_idx_last = desc_idx;
+
+ dp = &vq->vq_ring.desc_packed[desc_idx];
+ dxp = &vq->vq_descx[desc_idx];
+ vq->vq_free_cnt = (uint16_t)(vq->vq_free_cnt + dxp->ndescs);
+ if ((dp->flags & VRING_DESC_F_INDIRECT) == 0) {
+ while (dp->flags & VRING_DESC_F_NEXT) {
+ desc_idx_last = dxp->next;
+ dp = &vq->vq_ring.desc_packed[dxp->next];
+ dxp = &vq->vq_descx[dxp->next];
+ }
+ }
+
+ /*
+ * We must append the existing free chain, if any, to the end of
+ * newly freed chain. If the virtqueue was completely used, then
+ * head would be VQ_RING_DESC_CHAIN_END (ASSERTed above).
+ */
+ if (vq->vq_desc_tail_idx == VQ_RING_DESC_CHAIN_END) {
+ vq->vq_desc_head_idx = desc_idx;
+ }
+ else {
+ dxp_tail = &vq->vq_descx[vq->vq_desc_tail_idx];
+ dxp_tail->next = desc_idx;
+ }
+
+ vq->vq_desc_tail_idx = desc_idx_last;
+ dxp->next = VQ_RING_DESC_CHAIN_END;
+}
+
static uint16_t
virtqueue_dequeue_burst_rx(struct virtqueue *vq, struct rte_mbuf **rx_pkts,
uint32_t *len, uint16_t num)
@@ -165,6 +201,33 @@ virtqueue_dequeue_rx_inorder(struct virtqueue *vq,
#endif
/* Cleanup from completed transmits. */
+static void
+virtio_xmit_cleanup_packed(struct virtqueue *vq)
+{
+ uint16_t used_idx, id;
+ uint16_t size = vq->vq_nentries;
+ struct vring_desc_packed *desc = vq->vq_ring.desc_packed;
+ struct vq_desc_extra *dxp;
+
+ used_idx = vq->vq_used_cons_idx;
+ while (desc_is_used(&desc[used_idx], &vq->vq_ring) &&
+ vq->vq_free_cnt < size) {
+ used_idx = vq->vq_used_cons_idx;
+ id = desc[used_idx].index;
+ dxp = &vq->vq_descx[id];
+ if (++vq->vq_used_cons_idx >= size) {
+ vq->vq_used_cons_idx -= size;
+ vq->vq_ring.used_wrap_counter ^= 1;
+ }
+ vq_ring_free_chain_packed(vq, id);
+ if (dxp->cookie != NULL) {
+ rte_pktmbuf_free(dxp->cookie);
+ dxp->cookie = NULL;
+ }
+ used_idx = vq->vq_used_cons_idx;
+ }
+}
+
static void
virtio_xmit_cleanup(struct virtqueue *vq, uint16_t num)
{
@@ -456,6 +519,128 @@ virtqueue_enqueue_xmit_inorder(struct virtnet_tx *txvq,
vq->vq_desc_head_idx = idx & (vq->vq_nentries - 1);
}
+static inline void
+virtqueue_enqueue_xmit_packed(struct virtnet_tx *txvq, struct rte_mbuf *cookie,
+ uint16_t needed, int use_indirect, int can_push,
+ int in_order)
+{
+ struct virtio_tx_region *txr = txvq->virtio_net_hdr_mz->addr;
+ struct vq_desc_extra *dxp, *head_dxp;
+ struct virtqueue *vq = txvq->vq;
+ struct vring_desc_packed *start_dp, *head_dp;
+ uint16_t seg_num = cookie->nb_segs;
+ uint16_t idx, head_id;
+ uint16_t head_size = vq->hw->vtnet_hdr_size;
+ struct virtio_net_hdr *hdr;
+ int wrap_counter = vq->vq_ring.avail_wrap_counter;
+
+ head_id = vq->vq_desc_head_idx;
+ idx = head_id;
+ start_dp = vq->vq_ring.desc_packed;
+ dxp = &vq->vq_descx[idx];
+ dxp->ndescs = needed;
+
+ head_dp = &vq->vq_ring.desc_packed[head_id];
+ head_dxp = &vq->vq_descx[head_id];
+ head_dxp->cookie = (void *) cookie;
+
+ if (can_push) {
+ /* prepend cannot fail, checked by caller */
+ hdr = (struct virtio_net_hdr *)
+ rte_pktmbuf_prepend(cookie, head_size);
+ /* rte_pktmbuf_prepend() counts the hdr size to the pkt length,
+ * which is wrong. Below subtract restores correct pkt size.
+ */
+ cookie->pkt_len -= head_size;
+
+ /* if offload disabled, it is not zeroed below, do it now */
+ if (!vq->hw->has_tx_offload) {
+ ASSIGN_UNLESS_EQUAL(hdr->csum_start, 0);
+ ASSIGN_UNLESS_EQUAL(hdr->csum_offset, 0);
+ ASSIGN_UNLESS_EQUAL(hdr->flags, 0);
+ ASSIGN_UNLESS_EQUAL(hdr->gso_type, 0);
+ ASSIGN_UNLESS_EQUAL(hdr->gso_size, 0);
+ ASSIGN_UNLESS_EQUAL(hdr->hdr_len, 0);
+ }
+ } else if (use_indirect) {
+ /* setup tx ring slot to point to indirect
+ * descriptor list stored in reserved region.
+ *
+ * the first slot in indirect ring is already preset
+ * to point to the header in reserved region
+ */
+ start_dp[idx].addr = txvq->virtio_net_hdr_mem +
+ RTE_PTR_DIFF(&txr[idx].tx_indir_pq, txr);
+ start_dp[idx].len = (seg_num + 1) * sizeof(struct vring_desc_packed);
+ start_dp[idx].flags = VRING_DESC_F_INDIRECT;
+ hdr = (struct virtio_net_hdr *)&txr[idx].tx_hdr;
+
+ /* loop below will fill in rest of the indirect elements */
+ start_dp = txr[idx].tx_indir_pq;
+ idx = 1;
+ } else {
+ /* setup first tx ring slot to point to header
+ * stored in reserved region.
+ */
+ start_dp[idx].addr = txvq->virtio_net_hdr_mem +
+ RTE_PTR_DIFF(&txr[idx].tx_hdr, txr);
+ start_dp[idx].len = vq->hw->vtnet_hdr_size;
+ start_dp[idx].flags = VRING_DESC_F_NEXT;
+ start_dp[idx].flags |=
+ VRING_DESC_F_AVAIL(vq->vq_ring.avail_wrap_counter) |
+ VRING_DESC_F_USED(!vq->vq_ring.avail_wrap_counter);
+ hdr = (struct virtio_net_hdr *)&txr[idx].tx_hdr;
+ idx = dxp->next;
+ }
+
+ virtqueue_xmit_offload(hdr, cookie, vq->hw->has_tx_offload);
+
+ do {
+ if (idx >= vq->vq_nentries) {
+ idx = 0;
+ vq->vq_ring.avail_wrap_counter ^= 1;
+ }
+ start_dp[idx].addr = VIRTIO_MBUF_DATA_DMA_ADDR(cookie, vq);
+ start_dp[idx].len = cookie->data_len;
+ start_dp[idx].flags = cookie->next ? VRING_DESC_F_NEXT : 0;
+ start_dp[idx].flags |=
+ VRING_DESC_F_AVAIL(vq->vq_ring.avail_wrap_counter) |
+ VRING_DESC_F_USED(!vq->vq_ring.avail_wrap_counter);
+ if (use_indirect) {
+ if (++idx >= (seg_num + 1))
+ break;
+ } else {
+ dxp = &vq->vq_descx[idx];
+ idx = dxp->next;
+ }
+ } while ((cookie = cookie->next) != NULL);
+
+ if (use_indirect)
+ idx = vq->vq_ring.desc_packed[head_id].index;
+
+ vq->vq_free_cnt = (uint16_t)(vq->vq_free_cnt - needed);
+
+ rte_smp_wmb();
+ if (needed > 1)
+ head_dp->flags = VRING_DESC_F_NEXT |
+ VRING_DESC_F_AVAIL(wrap_counter) |
+ VRING_DESC_F_USED(!wrap_counter);
+ rte_smp_mb();
+
+ if (!in_order) {
+ if (vq->vq_desc_head_idx == VQ_RING_DESC_CHAIN_END)
+ vq->vq_desc_tail_idx = idx;
+ }
+ if (idx >= vq->vq_nentries) {
+ idx = 0;
+ vq->vq_ring.avail_wrap_counter ^= 1;
+ }
+ vq->vq_desc_head_idx = idx;
+ vq->vq_avail_idx = idx;
+
+}
+
+
static inline void
virtqueue_enqueue_xmit(struct virtnet_tx *txvq, struct rte_mbuf *cookie,
uint16_t needed, int use_indirect, int can_push,
@@ -1346,6 +1531,97 @@ virtio_recv_mergeable_pkts(void *rx_queue,
return nb_rx;
}
+uint16_t
+virtio_xmit_pkts_packed(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
+{
+ struct virtnet_tx *txvq = tx_queue;
+ struct virtqueue *vq = txvq->vq;
+ struct virtio_hw *hw = vq->hw;
+ uint16_t hdr_size = hw->vtnet_hdr_size;
+ uint16_t nb_tx = 0;
+ int error;
+
+ if (unlikely(hw->started == 0 && tx_pkts != hw->inject_pkts))
+ return nb_tx;
+
+ if (unlikely(nb_pkts < 1))
+ return nb_pkts;
+
+ PMD_TX_LOG(DEBUG, "%d packets to xmit", nb_pkts);
+
+ virtio_rmb();
+ if (likely(nb_pkts > vq->vq_nentries - vq->vq_free_thresh))
+ virtio_xmit_cleanup_packed(vq);
+
+ for (nb_tx = 0; nb_tx < nb_pkts; nb_tx++) {
+ struct rte_mbuf *txm = tx_pkts[nb_tx];
+ int can_push = 0, use_indirect = 0, slots, need;
+
+ /* Do VLAN tag insertion */
+ if (unlikely(txm->ol_flags & PKT_TX_VLAN_PKT)) {
+ error = rte_vlan_insert(&txm);
+ if (unlikely(error)) {
+ rte_pktmbuf_free(txm);
+ continue;
+ }
+ }
+
+ /* optimize ring usage */
+ if ((vtpci_with_feature(hw, VIRTIO_F_ANY_LAYOUT) ||
+ vtpci_with_feature(hw, VIRTIO_F_VERSION_1)) &&
+ rte_mbuf_refcnt_read(txm) == 1 &&
+ RTE_MBUF_DIRECT(txm) &&
+ txm->nb_segs == 1 &&
+ rte_pktmbuf_headroom(txm) >= hdr_size &&
+ rte_is_aligned(rte_pktmbuf_mtod(txm, char *),
+ __alignof__(struct virtio_net_hdr_mrg_rxbuf)))
+ can_push = 1;
+ else if (vtpci_with_feature(hw, VIRTIO_RING_F_INDIRECT_DESC) &&
+ txm->nb_segs < VIRTIO_MAX_TX_INDIRECT)
+ use_indirect = 1;
+
+ /* How many main ring entries are needed to this Tx?
+ * any_layout => number of segments
+ * indirect => 1
+ * default => number of segments + 1
+ */
+ slots = use_indirect ? 1 : (txm->nb_segs + !can_push);
+ need = slots - vq->vq_free_cnt;
+
+ /* Positive value indicates it need free vring descriptors */
+ if (unlikely(need > 0)) {
+ virtio_rmb();
+ need = RTE_MIN(need, (int)nb_pkts);
+
+ virtio_xmit_cleanup_packed(vq);
+ need = slots - vq->vq_free_cnt;
+ if (unlikely(need > 0)) {
+ PMD_TX_LOG(ERR,
+ "No free tx descriptors to transmit");
+ break;
+ }
+ }
+
+ /* Enqueue Packet buffers */
+ virtqueue_enqueue_xmit_packed(txvq, txm, slots, use_indirect,
+ can_push, 0);
+
+ txvq->stats.bytes += txm->pkt_len;
+ virtio_update_packet_stats(&txvq->stats, txm);
+ }
+
+ txvq->stats.packets += nb_tx;
+
+ if (likely(nb_tx)) {
+ if (unlikely(virtqueue_kick_prepare_packed(vq))) {
+ virtqueue_notify(vq);
+ PMD_TX_LOG(DEBUG, "Notified backend after xmit");
+ }
+ }
+
+ return nb_tx;
+}
+
uint16_t
virtio_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
{
diff --git a/drivers/net/virtio/virtqueue.h b/drivers/net/virtio/virtqueue.h
index 549962c28..58f600648 100644
--- a/drivers/net/virtio/virtqueue.h
+++ b/drivers/net/virtio/virtqueue.h
@@ -242,8 +242,12 @@ struct virtio_net_hdr_mrg_rxbuf {
#define VIRTIO_MAX_TX_INDIRECT 8
struct virtio_tx_region {
struct virtio_net_hdr_mrg_rxbuf tx_hdr;
- struct vring_desc tx_indir[VIRTIO_MAX_TX_INDIRECT]
- __attribute__((__aligned__(16)));
+ union {
+ struct vring_desc tx_indir[VIRTIO_MAX_TX_INDIRECT]
+ __attribute__((__aligned__(16)));
+ struct vring_desc_packed tx_indir_pq[VIRTIO_MAX_TX_INDIRECT]
+ __attribute__((__aligned__(16)));
+ };
};
static inline uint16_t
@@ -328,6 +332,7 @@ virtio_get_queue_type(struct virtio_hw *hw, uint16_t vtpci_queue_idx)
#define VIRTQUEUE_NUSED(vq) ((uint16_t)((vq)->vq_ring.used->idx - (vq)->vq_used_cons_idx))
void vq_ring_free_chain(struct virtqueue *vq, uint16_t desc_idx);
+void vq_ring_free_chain_packed(struct virtqueue *vq, uint16_t desc_idx);
void vq_ring_free_inorder(struct virtqueue *vq, uint16_t desc_idx,
uint16_t num);
@@ -361,6 +366,15 @@ virtqueue_kick_prepare(struct virtqueue *vq)
return !(vq->vq_ring.used->flags & VRING_USED_F_NO_NOTIFY);
}
+static inline int
+virtqueue_kick_prepare_packed(struct virtqueue *vq)
+{
+ uint16_t flags;
+
+ flags = vq->vq_ring.device_event->desc_event_flags & RING_EVENT_FLAGS_DESC;
+ return (flags != RING_EVENT_FLAGS_DISABLE);
+}
+
static inline void
virtqueue_notify(struct virtqueue *vq)
{
--
2.17.1
* [dpdk-dev] [PATCH v7 6/8] net/virtio: implement receive path for packed queues
2018-10-03 13:11 [dpdk-dev] [PATCH v7 0/8] implement packed virtqueues Jens Freimann
` (4 preceding siblings ...)
2018-10-03 13:11 ` [dpdk-dev] [PATCH v7 5/8] net/virtio: implement transmit path for packed queues Jens Freimann
@ 2018-10-03 13:11 ` Jens Freimann
2018-10-03 13:11 ` [dpdk-dev] [PATCH v7 7/8] net/virtio: add virtio send command packed queue support Jens Freimann
` (3 subsequent siblings)
9 siblings, 0 replies; 22+ messages in thread
From: Jens Freimann @ 2018-10-03 13:11 UTC (permalink / raw)
To: dev; +Cc: tiwei.bie, maxime.coquelin, Gavin.Hu
Implement the receive path for packed virtqueues.
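At its core the new receive path polls descriptors at the used-consumer
index and stops at the first one the device has not marked used; the
buffer id in the descriptor lets completions arrive out of order. A
condensed sketch of virtqueue_dequeue_burst_rx_packed() below (omitting
prefetching, error handling, and freeing the descriptor chain back to
the free list):

    static uint16_t
    dequeue_burst_rx_packed(struct virtqueue *vq, struct rte_mbuf **rx_pkts,
                            uint32_t *len, uint16_t num)
    {
            struct vring_desc_packed *desc = vq->vq_ring.desc_packed;
            uint16_t i;

            for (i = 0; i < num; i++) {
                    uint16_t used_idx = vq->vq_used_cons_idx;
                    uint16_t id;

                    /* Stop at the first descriptor not yet used. */
                    if (!desc_is_used(&desc[used_idx], &vq->vq_ring))
                            break;

                    len[i] = desc[used_idx].len;   /* bytes written */
                    id = desc[used_idx].index;     /* buffer id */
                    rx_pkts[i] = (struct rte_mbuf *)vq->vq_descx[id].cookie;

                    /* Advance, flipping the counter on wrap-around. */
                    if (++vq->vq_used_cons_idx >= vq->vq_nentries) {
                            vq->vq_used_cons_idx = 0;
                            vq->vq_ring.used_wrap_counter ^= 1;
                    }
            }
            return i;
    }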
Signed-off-by: Jens Freimann <jfreimann@redhat.com>
---
drivers/net/virtio/virtio_ethdev.c | 15 +-
drivers/net/virtio/virtio_ethdev.h | 2 +
drivers/net/virtio/virtio_rxtx.c | 268 +++++++++++++++++++++++++++--
drivers/net/virtio/virtqueue.c | 23 +++
drivers/net/virtio/virtqueue.h | 1 +
5 files changed, 292 insertions(+), 17 deletions(-)
diff --git a/drivers/net/virtio/virtio_ethdev.c b/drivers/net/virtio/virtio_ethdev.c
index c65ac365c..72bef1a44 100644
--- a/drivers/net/virtio/virtio_ethdev.c
+++ b/drivers/net/virtio/virtio_ethdev.c
@@ -390,8 +390,10 @@ virtio_init_queue(struct rte_eth_dev *dev, uint16_t vtpci_queue_idx)
vq->hw = hw;
vq->vq_queue_index = vtpci_queue_idx;
vq->vq_nentries = vq_size;
- if (vtpci_packed_queue(hw))
+ if (vtpci_packed_queue(hw)) {
vq->vq_ring.avail_wrap_counter = 1;
+ vq->vq_ring.used_wrap_counter = 1;
+ }
/*
* Reserve a memzone for vring elements
@@ -1332,7 +1334,13 @@ set_rxtx_funcs(struct rte_eth_dev *eth_dev)
{
struct virtio_hw *hw = eth_dev->data->dev_private;
- if (hw->use_simple_rx) {
+ if (vtpci_packed_queue(hw)) {
+ if (vtpci_with_feature(hw, VIRTIO_NET_F_MRG_RXBUF)) {
+ eth_dev->rx_pkt_burst = &virtio_recv_mergeable_pkts;
+ } else {
+ eth_dev->rx_pkt_burst = &virtio_recv_pkts_packed;
+ }
+ } else if (hw->use_simple_rx) {
PMD_INIT_LOG(INFO, "virtio: using simple Rx path on port %u",
eth_dev->data->port_id);
eth_dev->rx_pkt_burst = virtio_recv_pkts_vec;
@@ -1496,7 +1504,8 @@ virtio_init_device(struct rte_eth_dev *eth_dev, uint64_t req_features)
/* Setting up rx_header size for the device */
if (vtpci_with_feature(hw, VIRTIO_NET_F_MRG_RXBUF) ||
- vtpci_with_feature(hw, VIRTIO_F_VERSION_1))
+ vtpci_with_feature(hw, VIRTIO_F_VERSION_1) ||
+ vtpci_with_feature(hw, VIRTIO_F_RING_PACKED))
hw->vtnet_hdr_size = sizeof(struct virtio_net_hdr_mrg_rxbuf);
else
hw->vtnet_hdr_size = sizeof(struct virtio_net_hdr);
diff --git a/drivers/net/virtio/virtio_ethdev.h b/drivers/net/virtio/virtio_ethdev.h
index 05d355180..6c9247639 100644
--- a/drivers/net/virtio/virtio_ethdev.h
+++ b/drivers/net/virtio/virtio_ethdev.h
@@ -73,6 +73,8 @@ int virtio_dev_tx_queue_setup_finish(struct rte_eth_dev *dev,
uint16_t virtio_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
uint16_t nb_pkts);
+uint16_t virtio_recv_pkts_packed(void *rx_queue, struct rte_mbuf **rx_pkts,
+ uint16_t nb_pkts);
uint16_t virtio_recv_mergeable_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
uint16_t nb_pkts);
diff --git a/drivers/net/virtio/virtio_rxtx.c b/drivers/net/virtio/virtio_rxtx.c
index 4078fba8e..610a3962e 100644
--- a/drivers/net/virtio/virtio_rxtx.c
+++ b/drivers/net/virtio/virtio_rxtx.c
@@ -31,6 +31,7 @@
#include "virtqueue.h"
#include "virtio_rxtx.h"
#include "virtio_rxtx_simple.h"
+#include "virtio_ring.h"
#ifdef RTE_LIBRTE_VIRTIO_DEBUG_DUMP
#define VIRTIO_DUMP_PACKET(m, len) rte_pktmbuf_dump(stdout, m, len)
@@ -124,6 +125,47 @@ vq_ring_free_chain_packed(struct virtqueue *vq, uint16_t desc_idx)
dxp->next = VQ_RING_DESC_CHAIN_END;
}
+static uint16_t
+virtqueue_dequeue_burst_rx_packed(struct virtqueue *vq,
+ struct rte_mbuf **rx_pkts,
+ uint32_t *len,
+ uint16_t num)
+{
+ struct rte_mbuf *cookie;
+ uint16_t used_idx;
+ uint16_t id;
+ struct vring_desc_packed *desc;
+ uint16_t i;
+
+ for (i = 0; i < num; i++) {
+ used_idx = vq->vq_used_cons_idx;
+ desc = (struct vring_desc_packed *) vq->vq_ring.desc_packed;
+ if (!desc_is_used(&desc[used_idx], &vq->vq_ring))
+ return i;
+ len[i] = desc[used_idx].len;
+ id = desc[used_idx].index;
+ cookie = (struct rte_mbuf *)vq->vq_descx[id].cookie;
+
+ if (unlikely(cookie == NULL)) {
+ PMD_DRV_LOG(ERR, "vring descriptor with no mbuf cookie at %u",
+ vq->vq_used_cons_idx);
+ break;
+ }
+ rte_prefetch0(cookie);
+ rte_packet_prefetch(rte_pktmbuf_mtod(cookie, void *));
+ rx_pkts[i] = cookie;
+ vq_ring_free_chain_packed(vq, id);
+
+
+ if (++vq->vq_used_cons_idx >= vq->vq_nentries) {
+ vq->vq_used_cons_idx = 0;
+ vq->vq_ring.used_wrap_counter ^= 1;
+ }
+ }
+
+ return i;
+}
+
static uint16_t
virtqueue_dequeue_burst_rx(struct virtqueue *vq, struct rte_mbuf **rx_pkts,
uint32_t *len, uint16_t num)
@@ -369,6 +411,51 @@ virtqueue_enqueue_recv_refill(struct virtqueue *vq, struct rte_mbuf *cookie)
return 0;
}
+static inline int
+virtqueue_enqueue_recv_refill_packed(struct virtqueue *vq, struct rte_mbuf *cookie)
+{
+ struct vq_desc_extra *dxp;
+ struct virtio_hw *hw = vq->hw;
+ struct vring_desc_packed *start_dp;
+ uint16_t needed = 1;
+ uint16_t head_idx, idx;
+
+ if (unlikely(vq->vq_free_cnt == 0))
+ return -ENOSPC;
+ if (unlikely(vq->vq_free_cnt < needed))
+ return -EMSGSIZE;
+
+ head_idx = vq->vq_desc_head_idx;
+ if (unlikely(head_idx >= vq->vq_nentries))
+ return -EFAULT;
+
+ idx = head_idx;
+ dxp = &vq->vq_descx[idx];
+ dxp->cookie = (void *)cookie;
+ dxp->ndescs = needed;
+
+ start_dp = vq->vq_ring.desc_packed;
+ start_dp[idx].addr =
+ VIRTIO_MBUF_ADDR(cookie, vq) +
+ RTE_PKTMBUF_HEADROOM - hw->vtnet_hdr_size;
+ start_dp[idx].len =
+ cookie->buf_len - RTE_PKTMBUF_HEADROOM + hw->vtnet_hdr_size;
+ start_dp[idx].flags = VRING_DESC_F_WRITE;
+ idx = dxp->next;
+ vq->vq_desc_head_idx = idx;
+ if (vq->vq_desc_head_idx == VQ_RING_DESC_CHAIN_END)
+ vq->vq_desc_tail_idx = idx;
+ vq->vq_free_cnt = (uint16_t)(vq->vq_free_cnt - needed);
+
+ set_desc_avail(&vq->vq_ring, &vq->vq_ring.desc_packed[head_idx]);
+ if (++vq->vq_avail_idx >= vq->vq_nentries) {
+ vq->vq_avail_idx -= vq->vq_nentries;
+ vq->vq_ring.avail_wrap_counter ^= 1;
+ }
+ rte_smp_wmb();
+
+ return 0;
+}
/* When doing TSO, the IP length is not included in the pseudo header
* checksum of the packet given to the PMD, but for virtio it is
* expected.
@@ -789,6 +876,29 @@ virtio_dev_rx_queue_setup_finish(struct rte_eth_dev *dev, uint16_t queue_idx)
PMD_INIT_FUNC_TRACE();
+ if (vtpci_packed_queue(hw)) {
+ struct vring_desc_packed *desc;
+ struct vq_desc_extra *dxp;
+
+ for (desc_idx = 0; desc_idx < vq->vq_nentries;
+ desc_idx++) {
+ m = rte_mbuf_raw_alloc(rxvq->mpool);
+ if (unlikely(m == NULL))
+ return -ENOMEM;
+
+ dxp = &vq->vq_descx[desc_idx];
+ dxp->cookie = m;
+ dxp->ndescs = 1;
+
+ desc = &vq->vq_ring.desc_packed[desc_idx];
+ desc->addr = VIRTIO_MBUF_ADDR(m, vq) +
+ RTE_PKTMBUF_HEADROOM - hw->vtnet_hdr_size;
+ desc->len = m->buf_len - RTE_PKTMBUF_HEADROOM +
+ hw->vtnet_hdr_size;
+ desc->flags = VRING_DESC_F_WRITE;
+ }
+ }
+
/* Allocate blank mbufs for the each rx descriptor */
nbufs = 0;
@@ -841,7 +951,10 @@ virtio_dev_rx_queue_setup_finish(struct rte_eth_dev *dev, uint16_t queue_idx)
break;
/* Enqueue allocated buffers */
- error = virtqueue_enqueue_recv_refill(vq, m);
+ if (vtpci_packed_queue(vq->hw))
+ error = virtqueue_enqueue_recv_refill_packed(vq, m);
+ else
+ error = virtqueue_enqueue_recv_refill(vq, m);
if (error) {
rte_pktmbuf_free(m);
break;
@@ -849,7 +962,8 @@ virtio_dev_rx_queue_setup_finish(struct rte_eth_dev *dev, uint16_t queue_idx)
nbufs++;
}
- vq_update_avail_idx(vq);
+ if (!vtpci_packed_queue(vq->hw))
+ vq_update_avail_idx(vq);
}
PMD_INIT_LOG(DEBUG, "Allocated %d bufs", nbufs);
@@ -1173,6 +1287,109 @@ virtio_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
return nb_rx;
}
+uint16_t
+virtio_recv_pkts_packed(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
+{
+ struct virtnet_rx *rxvq = rx_queue;
+ struct virtqueue *vq = rxvq->vq;
+ struct virtio_hw *hw = vq->hw;
+ struct rte_mbuf *rxm, *new_mbuf;
+ uint16_t nb_used, num, nb_rx;
+ uint32_t len[VIRTIO_MBUF_BURST_SZ];
+ struct rte_mbuf *rcv_pkts[VIRTIO_MBUF_BURST_SZ];
+ int error;
+ uint32_t i, nb_enqueued;
+ uint32_t hdr_size;
+ struct virtio_net_hdr *hdr;
+
+ nb_rx = 0;
+ if (unlikely(hw->started == 0))
+ return nb_rx;
+
+ nb_used = VIRTIO_MBUF_BURST_SZ;
+
+ virtio_rmb();
+
+ num = likely(nb_used <= nb_pkts) ? nb_used : nb_pkts;
+ if (unlikely(num > VIRTIO_MBUF_BURST_SZ))
+ num = VIRTIO_MBUF_BURST_SZ;
+ if (likely(num > DESC_PER_CACHELINE))
+ num = num - ((vq->vq_used_cons_idx + num) % DESC_PER_CACHELINE);
+
+ num = virtqueue_dequeue_burst_rx_packed(vq, rcv_pkts, len, num);
+ PMD_RX_LOG(DEBUG, "used:%d dequeue:%d", nb_used, num);
+
+ nb_enqueued = 0;
+ hdr_size = hw->vtnet_hdr_size;
+
+ for (i = 0; i < num ; i++) {
+ rxm = rcv_pkts[i];
+
+ PMD_RX_LOG(DEBUG, "packet len:%d", len[i]);
+
+ if (unlikely(len[i] < hdr_size + ETHER_HDR_LEN)) {
+ PMD_RX_LOG(ERR, "Packet drop");
+ nb_enqueued++;
+ virtio_discard_rxbuf(vq, rxm);
+ rxvq->stats.errors++;
+ continue;
+ }
+
+ rxm->port = rxvq->port_id;
+ rxm->data_off = RTE_PKTMBUF_HEADROOM;
+ rxm->ol_flags = 0;
+ rxm->vlan_tci = 0;
+
+ rxm->pkt_len = (uint32_t)(len[i] - hdr_size);
+ rxm->data_len = (uint16_t)(len[i] - hdr_size);
+
+ hdr = (struct virtio_net_hdr *)((char *)rxm->buf_addr +
+ RTE_PKTMBUF_HEADROOM - hdr_size);
+
+ if (hw->vlan_strip)
+ rte_vlan_strip(rxm);
+
+ if (hw->has_rx_offload && virtio_rx_offload(rxm, hdr) < 0) {
+ virtio_discard_rxbuf(vq, rxm);
+ rxvq->stats.errors++;
+ continue;
+ }
+
+ virtio_rx_stats_updated(rxvq, rxm);
+
+ rx_pkts[nb_rx++] = rxm;
+ }
+
+ rxvq->stats.packets += nb_rx;
+
+ /* Allocate new mbuf for the used descriptor */
+ while (likely(!virtqueue_full(vq))) {
+ new_mbuf = rte_mbuf_raw_alloc(rxvq->mpool);
+ if (unlikely(new_mbuf == NULL)) {
+ struct rte_eth_dev *dev
+ = &rte_eth_devices[rxvq->port_id];
+ dev->data->rx_mbuf_alloc_failed++;
+ break;
+ }
+ error = virtqueue_enqueue_recv_refill_packed(vq, new_mbuf);
+ if (unlikely(error)) {
+ rte_pktmbuf_free(new_mbuf);
+ break;
+ }
+ nb_enqueued++;
+ }
+
+ if (likely(nb_enqueued)) {
+ if (unlikely(virtqueue_kick_prepare_packed(vq))) {
+ virtqueue_notify(vq);
+ PMD_RX_LOG(DEBUG, "Notified");
+ }
+ }
+
+ return nb_rx;
+}
+
+
uint16_t
virtio_recv_mergeable_pkts_inorder(void *rx_queue,
struct rte_mbuf **rx_pkts,
@@ -1379,12 +1596,16 @@ virtio_recv_mergeable_pkts(void *rx_queue,
uint16_t extra_idx;
uint32_t seg_res;
uint32_t hdr_size;
+ uint32_t rx_num = 0;
nb_rx = 0;
if (unlikely(hw->started == 0))
return nb_rx;
- nb_used = VIRTQUEUE_NUSED(vq);
+ if (vtpci_packed_queue(vq->hw))
+ nb_used = VIRTIO_MBUF_BURST_SZ;
+ else
+ nb_used = VIRTQUEUE_NUSED(vq);
virtio_rmb();
@@ -1397,13 +1618,21 @@ virtio_recv_mergeable_pkts(void *rx_queue,
seg_res = 0;
hdr_size = hw->vtnet_hdr_size;
+ vq->vq_used_idx = vq->vq_used_cons_idx;
+
while (i < nb_used) {
struct virtio_net_hdr_mrg_rxbuf *header;
if (nb_rx == nb_pkts)
break;
- num = virtqueue_dequeue_burst_rx(vq, rcv_pkts, len, 1);
+ if (vtpci_packed_queue(vq->hw))
+ num = virtqueue_dequeue_burst_rx_packed(vq, rcv_pkts,
+ len, 1);
+ else
+ num = virtqueue_dequeue_burst_rx(vq, rcv_pkts, len, 1);
+ if (num == 0)
+ return nb_rx;
if (num != 1)
continue;
@@ -1455,12 +1684,14 @@ virtio_recv_mergeable_pkts(void *rx_queue,
*/
uint16_t rcv_cnt =
RTE_MIN(seg_res, RTE_DIM(rcv_pkts));
- if (likely(VIRTQUEUE_NUSED(vq) >= rcv_cnt)) {
- uint32_t rx_num =
- virtqueue_dequeue_burst_rx(vq,
- rcv_pkts, len, rcv_cnt);
- i += rx_num;
- rcv_cnt = rx_num;
+ if (vtpci_packed_queue(vq->hw)) {
+ if (likely(vq->vq_free_cnt >= rcv_cnt)) {
+ rx_num = virtqueue_dequeue_burst_rx_packed(vq,
+ rcv_pkts, len, rcv_cnt);
+ }
+ } else if (likely(VIRTQUEUE_NUSED(vq) >= rcv_cnt)) {
+ rx_num = virtqueue_dequeue_burst_rx(vq,
+ rcv_pkts, len, rcv_cnt);
} else {
PMD_RX_LOG(ERR,
"No enough segments for packet.");
@@ -1469,6 +1700,8 @@ virtio_recv_mergeable_pkts(void *rx_queue,
rxvq->stats.errors++;
break;
}
+ i += rx_num;
+ rcv_cnt = rx_num;
extra_idx = 0;
@@ -1511,7 +1744,10 @@ virtio_recv_mergeable_pkts(void *rx_queue,
dev->data->rx_mbuf_alloc_failed++;
break;
}
- error = virtqueue_enqueue_recv_refill(vq, new_mbuf);
+ if (vtpci_packed_queue(vq->hw))
+ error = virtqueue_enqueue_recv_refill_packed(vq, new_mbuf);
+ else
+ error = virtqueue_enqueue_recv_refill(vq, new_mbuf);
if (unlikely(error)) {
rte_pktmbuf_free(new_mbuf);
break;
@@ -1520,9 +1756,13 @@ virtio_recv_mergeable_pkts(void *rx_queue,
}
if (likely(nb_enqueued)) {
- vq_update_avail_idx(vq);
-
- if (unlikely(virtqueue_kick_prepare(vq))) {
+ if (likely(!vtpci_packed_queue(vq->hw))) {
+ vq_update_avail_idx(vq);
+ if (unlikely(virtqueue_kick_prepare(vq))) {
+ virtqueue_notify(vq);
+ PMD_RX_LOG(DEBUG, "Notified");
+ }
+ } else if (virtqueue_kick_prepare_packed(vq)) {
virtqueue_notify(vq);
PMD_RX_LOG(DEBUG, "Notified");
}
diff --git a/drivers/net/virtio/virtqueue.c b/drivers/net/virtio/virtqueue.c
index 56a77cc71..6b541d789 100644
--- a/drivers/net/virtio/virtqueue.c
+++ b/drivers/net/virtio/virtqueue.c
@@ -58,12 +58,35 @@ virtqueue_detach_unused(struct virtqueue *vq)
void
virtqueue_rxvq_flush(struct virtqueue *vq)
{
+ struct vring_desc_packed *descs = vq->vq_ring.desc_packed;
struct virtnet_rx *rxq = &vq->rxq;
struct virtio_hw *hw = vq->hw;
struct vring_used_elem *uep;
struct vq_desc_extra *dxp;
uint16_t used_idx, desc_idx;
uint16_t nb_used, i;
+ uint16_t size = vq->vq_nentries;
+
+ if (vtpci_packed_queue(vq->hw)) {
+ return;
+ i = vq->vq_used_cons_idx;
+ if (i > size) {
+ PMD_INIT_LOG(ERR, "vq_used_cons_idx out of range, %d", vq->vq_used_cons_idx);
+ return;
+ }
+ while (desc_is_used(&descs[i], &vq->vq_ring) &&
+ i < size) {
+ dxp = &vq->vq_descx[descs[i].index];
+ if (dxp->cookie != NULL) {
+ rte_pktmbuf_free(dxp->cookie);
+ dxp->cookie = NULL;
+ }
+ vq_ring_free_chain_packed(vq, i);
+ i = dxp->next;
+ vq->vq_used_cons_idx++;
+ }
+ return;
+ }
nb_used = VIRTQUEUE_NUSED(vq);
diff --git a/drivers/net/virtio/virtqueue.h b/drivers/net/virtio/virtqueue.h
index 58f600648..064df94fb 100644
--- a/drivers/net/virtio/virtqueue.h
+++ b/drivers/net/virtio/virtqueue.h
@@ -172,6 +172,7 @@ struct virtqueue {
* trails vq_ring.used->idx.
*/
uint16_t vq_used_cons_idx;
+ uint16_t vq_used_idx;
uint16_t vq_nentries; /**< vring desc numbers */
uint16_t vq_free_cnt; /**< num of desc available */
uint16_t vq_avail_idx; /**< sync until needed */
--
2.17.1
* [dpdk-dev] [PATCH v7 7/8] net/virtio: add virtio send command packed queue support
2018-10-03 13:11 [dpdk-dev] [PATCH v7 0/8] implement packed virtqueues Jens Freimann
` (5 preceding siblings ...)
2018-10-03 13:11 ` [dpdk-dev] [PATCH v7 6/8] net/virtio: implement receive " Jens Freimann
@ 2018-10-03 13:11 ` Jens Freimann
2018-10-03 13:11 ` [dpdk-dev] [PATCH v7 8/8] net/virtio: enable packed virtqueues by default Jens Freimann
` (2 subsequent siblings)
9 siblings, 0 replies; 22+ messages in thread
From: Jens Freimann @ 2018-10-03 13:11 UTC (permalink / raw)
To: dev; +Cc: tiwei.bie, maxime.coquelin, Gavin.Hu
Use the packed virtqueue format when reading descriptors from
and writing descriptors to the control ring.
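The request is built as a single descriptor chain and completion is
polled synchronously. A condensed view of the layout and wait loop from
virtio_pq_send_command() below:

    /*
     * Request layout on the packed control ring, one descriptor chain:
     *
     *   desc[head]   ctrl header   driver -> device
     *   desc[...]    one payload descriptor per argument (dlen[k] bytes)
     *   desc[last]   status byte   device -> driver (VRING_DESC_F_WRITE)
     *
     * The head's AVAIL/USED flags are written last, after rte_smp_wmb(),
     * so the device never sees a partially built chain.
     */

    /* Poll for completion; the control path is not performance
     * critical. */
    do {
            rte_rmb();    /* make the device's ring writes visible */
            usleep(100);
    } while (!_desc_is_used(&desc[head]));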
Signed-off-by: Jens Freimann <jfreimann@redhat.com>
---
drivers/net/virtio/virtio_ethdev.c | 90 ++++++++++++++++++++++++++++++
1 file changed, 90 insertions(+)
diff --git a/drivers/net/virtio/virtio_ethdev.c b/drivers/net/virtio/virtio_ethdev.c
index 72bef1a44..a01fbf8b1 100644
--- a/drivers/net/virtio/virtio_ethdev.c
+++ b/drivers/net/virtio/virtio_ethdev.c
@@ -141,6 +141,90 @@ static const struct rte_virtio_xstats_name_off rte_virtio_txq_stat_strings[] = {
struct virtio_hw_internal virtio_hw_internal[RTE_MAX_ETHPORTS];
+static struct virtio_pmd_ctrl *
+virtio_pq_send_command(struct virtnet_ctl *cvq, struct virtio_pmd_ctrl *ctrl,
+ int *dlen, int pkt_num)
+{
+ struct virtqueue *vq = cvq->vq;
+ int head;
+ struct vring_desc_packed *desc = vq->vq_ring.desc_packed;
+ struct virtio_pmd_ctrl *result;
+ int wrap_counter;
+ int sum = 0;
+ int k;
+
+ /*
+ * Format is enforced in qemu code:
+ * One TX packet for header;
+ * At least one TX packet per argument;
+ * One RX packet for ACK.
+ */
+ head = vq->vq_avail_idx;
+ wrap_counter = vq->vq_ring.avail_wrap_counter;
+ desc[head].flags = VRING_DESC_F_NEXT;
+ desc[head].addr = cvq->virtio_net_hdr_mem;
+ desc[head].len = sizeof(struct virtio_net_ctrl_hdr);
+ vq->vq_free_cnt--;
+ if (++vq->vq_avail_idx >= vq->vq_nentries) {
+ vq->vq_avail_idx -= vq->vq_nentries;
+ vq->vq_ring.avail_wrap_counter ^= 1;
+ }
+
+ for (k = 0; k < pkt_num; k++) {
+ desc[vq->vq_avail_idx].addr = cvq->virtio_net_hdr_mem
+ + sizeof(struct virtio_net_ctrl_hdr)
+ + sizeof(ctrl->status) + sizeof(uint8_t) * sum;
+ desc[vq->vq_avail_idx].len = dlen[k];
+ desc[vq->vq_avail_idx].flags = VRING_DESC_F_NEXT;
+ sum += dlen[k];
+ vq->vq_free_cnt--;
+ _set_desc_avail(&desc[vq->vq_avail_idx],
+ vq->vq_ring.avail_wrap_counter);
+ rte_smp_wmb();
+ vq->vq_free_cnt--;
+ if (++vq->vq_avail_idx >= vq->vq_nentries) {
+ vq->vq_avail_idx -= vq->vq_nentries;
+ vq->vq_ring.avail_wrap_counter ^= 1;
+ }
+ }
+
+
+ desc[vq->vq_avail_idx].addr = cvq->virtio_net_hdr_mem
+ + sizeof(struct virtio_net_ctrl_hdr);
+ desc[vq->vq_avail_idx].len = sizeof(ctrl->status);
+ desc[vq->vq_avail_idx].flags = VRING_DESC_F_WRITE;
+ _set_desc_avail(&desc[vq->vq_avail_idx],
+ vq->vq_ring.avail_wrap_counter);
+ _set_desc_avail(&desc[head], wrap_counter);
+ rte_smp_wmb();
+
+ vq->vq_free_cnt--;
+ if (++vq->vq_avail_idx >= vq->vq_nentries) {
+ vq->vq_avail_idx -= vq->vq_nentries;
+ vq->vq_ring.avail_wrap_counter ^= 1;
+ }
+
+ virtqueue_notify(vq);
+
+ /* wait for used descriptors in virtqueue */
+ do {
+ rte_rmb();
+ usleep(100);
+ } while (!_desc_is_used(&desc[head]));
+
+ /* now get used descriptors */
+ while(desc_is_used(&desc[vq->vq_used_cons_idx], &vq->vq_ring)) {
+ vq->vq_free_cnt++;
+ if (++vq->vq_used_cons_idx >= vq->vq_nentries) {
+ vq->vq_used_cons_idx -= vq->vq_nentries;
+ vq->vq_ring.used_wrap_counter ^= 1;
+ }
+ }
+
+ result = cvq->virtio_net_hdr_mz->addr;
+ return result;
+}
+
static int
virtio_send_command(struct virtnet_ctl *cvq, struct virtio_pmd_ctrl *ctrl,
int *dlen, int pkt_num)
@@ -174,6 +258,11 @@ virtio_send_command(struct virtnet_ctl *cvq, struct virtio_pmd_ctrl *ctrl,
memcpy(cvq->virtio_net_hdr_mz->addr, ctrl,
sizeof(struct virtio_pmd_ctrl));
+ if (vtpci_packed_queue(vq->hw)) {
+ result = virtio_pq_send_command(cvq, ctrl, dlen, pkt_num);
+ goto out_unlock;
+ }
+
/*
* Format is enforced in qemu code:
* One TX packet for header;
@@ -245,6 +334,7 @@ virtio_send_command(struct virtnet_ctl *cvq, struct virtio_pmd_ctrl *ctrl,
result = cvq->virtio_net_hdr_mz->addr;
+out_unlock:
rte_spinlock_unlock(&cvq->lock);
return result->status;
}
--
2.17.1
* [dpdk-dev] [PATCH v7 8/8] net/virtio: enable packed virtqueues by default
2018-10-03 13:11 [dpdk-dev] [PATCH v7 0/8] implement packed virtqueues Jens Freimann
` (6 preceding siblings ...)
2018-10-03 13:11 ` [dpdk-dev] [PATCH v7 7/8] net/virtio: add virtio send command packed queue support Jens Freimann
@ 2018-10-03 13:11 ` Jens Freimann
2018-10-03 13:19 ` [dpdk-dev] [PATCH v7 0/8] implement packed virtqueues Jens Freimann
2018-10-04 13:59 ` Maxime Coquelin
9 siblings, 0 replies; 22+ messages in thread
From: Jens Freimann @ 2018-10-03 13:11 UTC (permalink / raw)
To: dev; +Cc: tiwei.bie, maxime.coquelin, Gavin.Hu
Signed-off-by: Jens Freimann <jfreimann@redhat.com>
---
drivers/net/virtio/virtio_ethdev.h | 1 +
1 file changed, 1 insertion(+)
diff --git a/drivers/net/virtio/virtio_ethdev.h b/drivers/net/virtio/virtio_ethdev.h
index 6c9247639..d9b4feee2 100644
--- a/drivers/net/virtio/virtio_ethdev.h
+++ b/drivers/net/virtio/virtio_ethdev.h
@@ -34,6 +34,7 @@
1u << VIRTIO_RING_F_INDIRECT_DESC | \
1ULL << VIRTIO_F_VERSION_1 | \
1ULL << VIRTIO_F_IN_ORDER | \
+ 1ULL << VIRTIO_F_RING_PACKED | \
1ULL << VIRTIO_F_IOMMU_PLATFORM)
#define VIRTIO_PMD_SUPPORTED_GUEST_FEATURES \
--
2.17.1
* Re: [dpdk-dev] [PATCH v7 0/8] implement packed virtqueues
2018-10-03 13:11 [dpdk-dev] [PATCH v7 0/8] implement packed virtqueues Jens Freimann
` (7 preceding siblings ...)
2018-10-03 13:11 ` [dpdk-dev] [PATCH v7 8/8] net/virtio: enable packed virtqueues by default Jens Freimann
@ 2018-10-03 13:19 ` Jens Freimann
2018-10-04 13:59 ` Maxime Coquelin
9 siblings, 0 replies; 22+ messages in thread
From: Jens Freimann @ 2018-10-03 13:19 UTC (permalink / raw)
To: dev; +Cc: tiwei.bie, maxime.coquelin, Gavin.Hu
On Wed, Oct 03, 2018 at 03:11:10PM +0200, Jens Freimann wrote:
>I'm sending this out to get some comments, especially on the TX path code.
Tested with this QEMU: https://github.com/jensfr/qemu/pull/new/wexu-packed-ring-v1-fixed
* Re: [dpdk-dev] [PATCH v7 1/8] net/virtio: vring init for packed queues
2018-10-03 13:11 ` [dpdk-dev] [PATCH v7 1/8] net/virtio: vring init for packed queues Jens Freimann
@ 2018-10-04 11:54 ` Maxime Coquelin
2018-10-05 8:10 ` Jens Freimann
0 siblings, 1 reply; 22+ messages in thread
From: Maxime Coquelin @ 2018-10-04 11:54 UTC (permalink / raw)
To: Jens Freimann, dev; +Cc: tiwei.bie, Gavin.Hu
On 10/03/2018 03:11 PM, Jens Freimann wrote:
> Add and initialize descriptor data structures.
>
> To allow out-of-order processing, a .next field was added to
> struct vq_desc_extra because there is none in the packed virtqueue
> descriptor itself. This is used to chain descriptors and process them
> similar to how it is handled for split virtqueues.
>
> Signed-off-by: Jens Freimann <jfreimann@redhat.com>
> ---
> drivers/net/virtio/virtio_ethdev.c | 28 +++++++++------
> drivers/net/virtio/virtio_pci.h | 8 +++++
> drivers/net/virtio/virtio_ring.h | 55 +++++++++++++++++++++++++++---
> drivers/net/virtio/virtqueue.h | 13 ++++++-
> 4 files changed, 88 insertions(+), 16 deletions(-)
>
> diff --git a/drivers/net/virtio/virtio_ethdev.c b/drivers/net/virtio/virtio_ethdev.c
> index b81df0a99..d6a1613dd 100644
> --- a/drivers/net/virtio/virtio_ethdev.c
> +++ b/drivers/net/virtio/virtio_ethdev.c
> @@ -299,19 +299,27 @@ virtio_init_vring(struct virtqueue *vq)
>
> PMD_INIT_FUNC_TRACE();
>
> - /*
> - * Reinitialise since virtio port might have been stopped and restarted
> - */
> memset(ring_mem, 0, vq->vq_ring_size);
> - vring_init(vr, size, ring_mem, VIRTIO_PCI_VRING_ALIGN);
> + vring_init(vq->hw, vr, size, ring_mem, VIRTIO_PCI_VRING_ALIGN);
> +
> + vq->vq_free_cnt = vq->vq_nentries;
> + memset(vq->vq_descx, 0, sizeof(struct vq_desc_extra) * vq->vq_nentries);
> vq->vq_used_cons_idx = 0;
> + vq->vq_avail_idx = 0;
> vq->vq_desc_head_idx = 0;
> - vq->vq_avail_idx = 0;
> vq->vq_desc_tail_idx = (uint16_t)(vq->vq_nentries - 1);
> - vq->vq_free_cnt = vq->vq_nentries;
> - memset(vq->vq_descx, 0, sizeof(struct vq_desc_extra) * vq->vq_nentries);
> + if (vtpci_packed_queue(vq->hw)) {
> + uint16_t i;
> + for(i = 0; i < size - 1; i++) {
> + vq->vq_descx[i].next = i + 1;
I would move it into a dedicated loop, and do it only if IN_ORDER hasn't
been negotiated. Not for performance reasons of course, but just to
highlight that this extra stuff isn't needed with in-order.
> + vq->vq_ring.desc_packed[i].index = i;
I would use the vring_desc_init_packed function declared below instead.
> + }
Trailing space.
> + vq->vq_ring.desc_packed[i].index = i;
> + vq->vq_descx[i].next = VQ_RING_DESC_CHAIN_END;
> + } else {
>
> - vring_desc_init(vr->desc, size);
> + vring_desc_init_split(vr->desc, size);
> + }
>
> /*
> * Disable device(host) interrupting guest
> @@ -386,7 +394,7 @@ virtio_init_queue(struct rte_eth_dev *dev, uint16_t vtpci_queue_idx)
> /*
> * Reserve a memzone for vring elements
> */
> - size = vring_size(vq_size, VIRTIO_PCI_VRING_ALIGN);
> + size = vring_size(hw, vq_size, VIRTIO_PCI_VRING_ALIGN);
> vq->vq_ring_size = RTE_ALIGN_CEIL(size, VIRTIO_PCI_VRING_ALIGN);
> PMD_INIT_LOG(DEBUG, "vring_size: %d, rounded_vring_size: %d",
> size, vq->vq_ring_size);
> @@ -489,7 +497,7 @@ virtio_init_queue(struct rte_eth_dev *dev, uint16_t vtpci_queue_idx)
> for (i = 0; i < vq_size; i++) {
> struct vring_desc *start_dp = txr[i].tx_indir;
>
> - vring_desc_init(start_dp, RTE_DIM(txr[i].tx_indir));
> + vring_desc_init_split(start_dp, RTE_DIM(txr[i].tx_indir));
>
> /* first indirect descriptor is always the tx header */
> start_dp->addr = txvq->virtio_net_hdr_mem
> diff --git a/drivers/net/virtio/virtio_pci.h b/drivers/net/virtio/virtio_pci.h
> index 58fdd3d45..90204d281 100644
> --- a/drivers/net/virtio/virtio_pci.h
> +++ b/drivers/net/virtio/virtio_pci.h
> @@ -113,6 +113,8 @@ struct virtnet_ctl;
>
> #define VIRTIO_F_VERSION_1 32
> #define VIRTIO_F_IOMMU_PLATFORM 33
> +#define VIRTIO_F_RING_PACKED 34
> +#define VIRTIO_F_IN_ORDER 35
Isn't that feature already declared?
>
> /*
> * Some VirtIO feature bits (currently bits 28 through 31) are
> @@ -314,6 +316,12 @@ vtpci_with_feature(struct virtio_hw *hw, uint64_t bit)
> return (hw->guest_features & (1ULL << bit)) != 0;
> }
>
> +static inline int
> +vtpci_packed_queue(struct virtio_hw *hw)
> +{
> + return vtpci_with_feature(hw, VIRTIO_F_RING_PACKED);
> +}
> +
> /*
> * Function declaration from virtio_pci.c
> */
> diff --git a/drivers/net/virtio/virtio_ring.h b/drivers/net/virtio/virtio_ring.h
> index 9e3c2a015..309069fdb 100644
> --- a/drivers/net/virtio/virtio_ring.h
> +++ b/drivers/net/virtio/virtio_ring.h
> @@ -54,11 +54,38 @@ struct vring_used {
> struct vring_used_elem ring[0];
> };
>
> +/* For support of packed virtqueues in Virtio 1.1 the format of descriptors
> + * looks like this.
> + */
> +struct vring_desc_packed {
> + uint64_t addr;
> + uint32_t len;
> + uint16_t index;
> + uint16_t flags;
> +} __attribute__ ((aligned(16)));
> +
> +#define RING_EVENT_FLAGS_ENABLE 0x0
> +#define RING_EVENT_FLAGS_DISABLE 0x1
> +#define RING_EVENT_FLAGS_DESC 0x2
> +struct vring_packed_desc_event {
> + uint16_t desc_event_off_wrap;
> + uint16_t desc_event_flags;
> +};
> +
> struct vring {
> unsigned int num;
> - struct vring_desc *desc;
> - struct vring_avail *avail;
> - struct vring_used *used;
> + union {
> + struct vring_desc_packed *desc_packed;
> + struct vring_desc *desc;
> + };
> + union {
> + struct vring_avail *avail;
> + struct vring_packed_desc_event *driver_event;
> + };
> + union {
> + struct vring_used *used;
> + struct vring_packed_desc_event *device_event;
> + };
> };
>
> /* The standard layout for the ring is a continuous chunk of memory which
> @@ -95,10 +122,18 @@ struct vring {
> #define vring_avail_event(vr) (*(uint16_t *)&(vr)->used->ring[(vr)->num])
>
> static inline size_t
> -vring_size(unsigned int num, unsigned long align)
> +vring_size(struct virtio_hw *hw, unsigned int num, unsigned long align)
> {
> size_t size;
>
> + if (vtpci_packed_queue(hw)) {
> + size = num * sizeof(struct vring_desc_packed);
> + size += sizeof(struct vring_packed_desc_event);
> + size = RTE_ALIGN_CEIL(size, align);
> + size += sizeof(struct vring_packed_desc_event);
> + return size;
> + }
> +
> size = num * sizeof(struct vring_desc);
> size += sizeof(struct vring_avail) + (num * sizeof(uint16_t));
> size = RTE_ALIGN_CEIL(size, align);
> @@ -108,10 +143,20 @@ vring_size(unsigned int num, unsigned long align)
> }
>
> static inline void
> -vring_init(struct vring *vr, unsigned int num, uint8_t *p,
> +vring_init(struct virtio_hw *hw, struct vring *vr, unsigned int num, uint8_t *p,
> unsigned long align)
> {
> vr->num = num;
> + if (vtpci_packed_queue(hw)) {
> + vr->desc_packed = (struct vring_desc_packed *)p;
> + vr->driver_event = (struct vring_packed_desc_event *)(p +
> + num * sizeof(struct vring_desc_packed));
> + vr->device_event = (struct vring_packed_desc_event *)
> + RTE_ALIGN_CEIL((uintptr_t)(vr->driver_event +
> + sizeof(struct vring_packed_desc_event)), align);
> + return;
> + }
> +
As a general comment, I would find it cleaner to have dedicated
functions for split and packed variants, like vring_init_split,
vring_init_packed, etc...
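A rough, untested sketch of what I mean (the split body just moves into
its own helper; note vr->driver_event + 1, since pointer arithmetic
already advances by the element size):

	static inline void
	vring_init_split(struct vring *vr, uint8_t *p, unsigned int num,
			unsigned long align)
	{
		vr->num = num;
		vr->desc = (struct vring_desc *)p;
		vr->avail = (struct vring_avail *)(p +
				num * sizeof(struct vring_desc));
		vr->used = (void *)
			RTE_ALIGN_CEIL((uintptr_t)(&vr->avail->ring[num]),
				align);
	}

	static inline void
	vring_init_packed(struct vring *vr, uint8_t *p, unsigned int num,
			unsigned long align)
	{
		vr->num = num;
		vr->desc_packed = (struct vring_desc_packed *)p;
		vr->driver_event = (struct vring_packed_desc_event *)(p +
				num * sizeof(struct vring_desc_packed));
		/* device event area follows the driver event area,
		 * aligned up
		 */
		vr->device_event = (struct vring_packed_desc_event *)
			RTE_ALIGN_CEIL((uintptr_t)(vr->driver_event + 1),
				align);
	}

	static inline void
	vring_init(struct virtio_hw *hw, struct vring *vr, unsigned int num,
			uint8_t *p, unsigned long align)
	{
		if (vtpci_packed_queue(hw))
			vring_init_packed(vr, p, num, align);
		else
			vring_init_split(vr, p, num, align);
	}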
> vr->desc = (struct vring_desc *) p;
> vr->avail = (struct vring_avail *) (p +
> num * sizeof(struct vring_desc));
> diff --git a/drivers/net/virtio/virtqueue.h b/drivers/net/virtio/virtqueue.h
> index 26518ed98..6a4f92b79 100644
> --- a/drivers/net/virtio/virtqueue.h
> +++ b/drivers/net/virtio/virtqueue.h
> @@ -161,6 +161,7 @@ struct virtio_pmd_ctrl {
> struct vq_desc_extra {
> void *cookie;
> uint16_t ndescs;
> + uint16_t next;
> };
>
> struct virtqueue {
> @@ -245,9 +246,19 @@ struct virtio_tx_region {
> __attribute__((__aligned__(16)));
> };
>
> +static inline void
> +vring_desc_init_packed(struct vring *vr, int n)
> +{
> + int i;
> + for (i = 0; i < n; i++) {
> + struct vring_desc_packed *desc = &vr->desc_packed[i];
> + desc->index = i;
> + }
> +}
I see the split variant is also called to init the indirect tables.
Can you confirm this isn't needed in the case of packed ring?
> +
> /* Chain all the descriptors in the ring with an END */
> static inline void
> -vring_desc_init(struct vring_desc *dp, uint16_t n)
> +vring_desc_init_split(struct vring_desc *dp, uint16_t n)
> {
> uint16_t i;
>
>
^ permalink raw reply [flat|nested] 22+ messages in thread
* Re: [dpdk-dev] [PATCH v7 2/8] net/virtio: add packed virtqueue defines
2018-10-03 13:11 ` [dpdk-dev] [PATCH v7 2/8] net/virtio: add packed virtqueue defines Jens Freimann
@ 2018-10-04 11:54 ` Maxime Coquelin
0 siblings, 0 replies; 22+ messages in thread
From: Maxime Coquelin @ 2018-10-04 11:54 UTC (permalink / raw)
To: Jens Freimann, dev; +Cc: tiwei.bie, Gavin.Hu
On 10/03/2018 03:11 PM, Jens Freimann wrote:
> Signed-off-by: Jens Freimann <jfreimann@redhat.com>
> ---
> drivers/net/virtio/virtio_ring.h | 4 ++++
> 1 file changed, 4 insertions(+)
>
> diff --git a/drivers/net/virtio/virtio_ring.h b/drivers/net/virtio/virtio_ring.h
> index 309069fdb..36a65f9b3 100644
> --- a/drivers/net/virtio/virtio_ring.h
> +++ b/drivers/net/virtio/virtio_ring.h
> @@ -15,7 +15,11 @@
> #define VRING_DESC_F_WRITE 2
> /* This means the buffer contains a list of buffer descriptors. */
> #define VRING_DESC_F_INDIRECT 4
> +/* This flag means the descriptor was made available by the driver */
>
Trailing new line
> +#define VRING_DESC_F_AVAIL(b) ((uint16_t)(b) << 7)
> +/* This flag means the descriptor was used by the device */
> +#define VRING_DESC_F_USED(b) ((uint16_t)(b) << 15)
> /* The Host uses this in used->flags to advise the Guest: don't kick me
> * when you add a buffer. It's unreliable, so it's simply an
> * optimization. Guest will still kick if it's out of buffers. */
>
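For reference, a driver makes a descriptor available with these two
macros roughly like this (sketch only; flags and wrap_counter are
placeholders, wrap_counter being the driver's 1-bit avail wrap counter):

	/* mark desc available: avail bit = wrap counter,
	 * used bit = inverse of the wrap counter
	 */
	desc->flags = flags |
		VRING_DESC_F_AVAIL(wrap_counter) |
		VRING_DESC_F_USED(!wrap_counter);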
^ permalink raw reply [flat|nested] 22+ messages in thread
* Re: [dpdk-dev] [PATCH v7 4/8] net/virtio: dump packed virtqueue data
2018-10-03 13:11 ` [dpdk-dev] [PATCH v7 4/8] net/virtio: dump packed virtqueue data Jens Freimann
@ 2018-10-04 13:23 ` Maxime Coquelin
0 siblings, 0 replies; 22+ messages in thread
From: Maxime Coquelin @ 2018-10-04 13:23 UTC (permalink / raw)
To: Jens Freimann, dev; +Cc: tiwei.bie, Gavin.Hu
On 10/03/2018 03:11 PM, Jens Freimann wrote:
> Add support to dump packed virtqueue data to the
> VIRTQUEUE_DUMP() macro.
>
> Signed-off-by: Jens Freimann <jfreimann@redhat.com>
> ---
> drivers/net/virtio/virtqueue.h | 9 +++++++++
> 1 file changed, 9 insertions(+)
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
> diff --git a/drivers/net/virtio/virtqueue.h b/drivers/net/virtio/virtqueue.h
> index b55ace958..549962c28 100644
> --- a/drivers/net/virtio/virtqueue.h
> +++ b/drivers/net/virtio/virtqueue.h
> @@ -377,6 +377,15 @@ virtqueue_notify(struct virtqueue *vq)
> uint16_t used_idx, nused; \
> used_idx = (vq)->vq_ring.used->idx; \
> nused = (uint16_t)(used_idx - (vq)->vq_used_cons_idx); \
> + if (vtpci_packed_queue((vq)->hw)) { \
> + PMD_INIT_LOG(DEBUG, \
> + "VQ: - size=%d; free=%d; used_cons_idx=%d; avail_idx=%d", \
> + "VQ: - avail_wrap_counter=%d; used_wrap_counter=%d", \
> + (vq)->vq_nentries, (vq)->vq_free_cnt, (vq)->vq_used_cons_idx, \
> + (vq)->vq_avail_idx, (vq)->vq_ring.avail_wrap_counter, \
> + (vq)->vq_ring.used_wrap_counter); \
> + break; \
> + } \
> PMD_INIT_LOG(DEBUG, \
> "VQ: - size=%d; free=%d; used=%d; desc_head_idx=%d;" \
> " avail.idx=%d; used_cons_idx=%d; used.idx=%d;" \
>
^ permalink raw reply [flat|nested] 22+ messages in thread
* Re: [dpdk-dev] [PATCH v7 0/8] implement packed virtqueues
2018-10-03 13:11 [dpdk-dev] [PATCH v7 0/8] implement packed virtqueues Jens Freimann
` (8 preceding siblings ...)
2018-10-03 13:19 ` [dpdk-dev] [PATCH v7 0/8] implement packed virtqueues Jens Freimann
@ 2018-10-04 13:59 ` Maxime Coquelin
9 siblings, 0 replies; 22+ messages in thread
From: Maxime Coquelin @ 2018-10-04 13:59 UTC (permalink / raw)
To: Jens Freimann, dev; +Cc: tiwei.bie, Gavin.Hu
Hi Jens,
On 10/03/2018 03:11 PM, Jens Freimann wrote:
> To support out-of-order processing I used the vq_desc_extra struct to
> add a .next field and use it as a list for managing descriptors. This
> seemed to add less complexity to the code than adding a new data
> structure to use as a list for packed queue descriptors.
Looking at the series, I don't see a specific path for when in-order has
been negotiated. Is that intended?
Wouldn't we save some cache misses by assuming packets are processed
in order?
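For example, something like this — purely hypothetical, the name is made
up — is what I have in mind:

	/* hypothetical in-order reclaim: used descriptors come back in
	 * ring order, so there is no .next chain to chase
	 */
	static inline void
	xmit_cleanup_inorder_packed(struct virtqueue *vq, uint16_t num)
	{
		struct vring *vr = &vq->vq_ring;
		uint16_t idx = vq->vq_used_cons_idx;

		while (num-- && desc_is_used(&vr->desc_packed[idx], vr)) {
			struct vq_desc_extra *dxp = &vq->vq_descx[idx];

			rte_pktmbuf_free(dxp->cookie);
			dxp->cookie = NULL;
			vq->vq_free_cnt += dxp->ndescs;
			idx += dxp->ndescs;
			if (idx >= vq->vq_nentries) {
				idx -= vq->vq_nentries;
				vr->used_wrap_counter ^= 1;
			}
		}
		vq->vq_used_cons_idx = idx;
	}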
Maxime
^ permalink raw reply [flat|nested] 22+ messages in thread
* Re: [dpdk-dev] [PATCH v7 1/8] net/virtio: vring init for packed queues
2018-10-04 11:54 ` Maxime Coquelin
@ 2018-10-05 8:10 ` Jens Freimann
0 siblings, 0 replies; 22+ messages in thread
From: Jens Freimann @ 2018-10-05 8:10 UTC (permalink / raw)
To: Maxime Coquelin; +Cc: dev, tiwei.bie, Gavin.Hu
On Thu, Oct 04, 2018 at 01:54:00PM +0200, Maxime Coquelin wrote:
>On 10/03/2018 03:11 PM, Jens Freimann wrote:
>>Add and initialize descriptor data structures.
>>
>>To allow out of order processing a .next field was added to
>>struct vq_desc_extra because there is none in the packed virtqueue
>>descriptor itself. This is used to chain descriptors and process them
>>similarly to how it is handled for split virtqueues.
>>
>>Signed-off-by: Jens Freimann <jfreimann@redhat.com>
>>---
>> drivers/net/virtio/virtio_ethdev.c | 28 +++++++++------
>> drivers/net/virtio/virtio_pci.h | 8 +++++
>> drivers/net/virtio/virtio_ring.h | 55 +++++++++++++++++++++++++++---
>> drivers/net/virtio/virtqueue.h | 13 ++++++-
>> 4 files changed, 88 insertions(+), 16 deletions(-)
>>
>>diff --git a/drivers/net/virtio/virtio_ethdev.c b/drivers/net/virtio/virtio_ethdev.c
>>index b81df0a99..d6a1613dd 100644
>>--- a/drivers/net/virtio/virtio_ethdev.c
>>+++ b/drivers/net/virtio/virtio_ethdev.c
>>@@ -299,19 +299,27 @@ virtio_init_vring(struct virtqueue *vq)
>> PMD_INIT_FUNC_TRACE();
>>- /*
>>- * Reinitialise since virtio port might have been stopped and restarted
>>- */
>> memset(ring_mem, 0, vq->vq_ring_size);
>>- vring_init(vr, size, ring_mem, VIRTIO_PCI_VRING_ALIGN);
>>+ vring_init(vq->hw, vr, size, ring_mem, VIRTIO_PCI_VRING_ALIGN);
>>+
>>+ vq->vq_free_cnt = vq->vq_nentries;
>>+ memset(vq->vq_descx, 0, sizeof(struct vq_desc_extra) * vq->vq_nentries);
>> vq->vq_used_cons_idx = 0;
>>+ vq->vq_avail_idx = 0;
>> vq->vq_desc_head_idx = 0;
>>- vq->vq_avail_idx = 0;
>> vq->vq_desc_tail_idx = (uint16_t)(vq->vq_nentries - 1);
>>- vq->vq_free_cnt = vq->vq_nentries;
>>- memset(vq->vq_descx, 0, sizeof(struct vq_desc_extra) * vq->vq_nentries);
>>+ if (vtpci_packed_queue(vq->hw)) {
>>+ uint16_t i;
>>+ for(i = 0; i < size - 1; i++) {
>>+ vq->vq_descx[i].next = i + 1;
>
>I would move it into a dedicated loop, and do it only if IN_ORDER hasn't
>been negotiated. Not for performance reasons, of course, but just to
>highlight that this extra stuff isn't needed with in-order.
makes sense, I'll change it.
>
>>+ vq->vq_ring.desc_packed[i].index = i;
>
>I would use the vring_desc_init_packed function declared below instead.
ok
>
>>+ }
>
>Trailing space.
[...]
>>diff --git a/drivers/net/virtio/virtio_pci.h b/drivers/net/virtio/virtio_pci.h
>>index 58fdd3d45..90204d281 100644
>>--- a/drivers/net/virtio/virtio_pci.h
>>+++ b/drivers/net/virtio/virtio_pci.h
>>@@ -113,6 +113,8 @@ struct virtnet_ctl;
>> #define VIRTIO_F_VERSION_1 32
>> #define VIRTIO_F_IOMMU_PLATFORM 33
>>+#define VIRTIO_F_RING_PACKED 34
>>+#define VIRTIO_F_IN_ORDER 35
>Isn't that feature already declared?
Yes, it is. I'll remove it here.
[...]
>> static inline void
>>-vring_init(struct vring *vr, unsigned int num, uint8_t *p,
>>+vring_init(struct virtio_hw *hw, struct vring *vr, unsigned int num, uint8_t *p,
>> unsigned long align)
>> {
>> vr->num = num;
>>+ if (vtpci_packed_queue(hw)) {
>>+ vr->desc_packed = (struct vring_desc_packed *)p;
>>+ vr->driver_event = (struct vring_packed_desc_event *)(p +
>>+ num * sizeof(struct vring_desc_packed));
>>+ vr->device_event = (struct vring_packed_desc_event *)
>>+ RTE_ALIGN_CEIL((uintptr_t)(vr->driver_event +
>>+ sizeof(struct vring_packed_desc_event)), align);
>>+ return;
>>+ }
>>+
>
>As a general comment, I would find it cleaner to have dedicated
>functions for split and packed variants, like vring_init_split,
>vring_init_packed, etc...
ok, no problem.
>> vr->desc = (struct vring_desc *) p;
>> vr->avail = (struct vring_avail *) (p +
>> num * sizeof(struct vring_desc));
>>diff --git a/drivers/net/virtio/virtqueue.h b/drivers/net/virtio/virtqueue.h
>>index 26518ed98..6a4f92b79 100644
>>--- a/drivers/net/virtio/virtqueue.h
>>+++ b/drivers/net/virtio/virtqueue.h
>>@@ -161,6 +161,7 @@ struct virtio_pmd_ctrl {
>> struct vq_desc_extra {
>> void *cookie;
>> uint16_t ndescs;
>>+ uint16_t next;
>> };
>> struct virtqueue {
>>@@ -245,9 +246,19 @@ struct virtio_tx_region {
>> __attribute__((__aligned__(16)));
>> };
>>+static inline void
>>+vring_desc_init_packed(struct vring *vr, int n)
>>+{
>>+ int i;
>>+ for (i = 0; i < n; i++) {
>>+ struct vring_desc_packed *desc = &vr->desc_packed[i];
>>+ desc->index = i;
>>+ }
>>+}
>
>I see the split variant is also called to init the indirect tables.
>Can you confirm this isn't needed in the case of packed ring?
Yes, the split variant just chains descriptors in the indirect table. For
packed virtqueues we don't need to do this according to the spec.
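For reference, the split variant just does a chaining loop along these
lines:

	/* split only: chain each descriptor to the following one */
	static inline void
	vring_desc_init_split(struct vring_desc *dp, uint16_t n)
	{
		uint16_t i;

		for (i = 0; i < n - 1; i++)
			dp[i].next = (uint16_t)(i + 1);
	}

Packed descriptors have no .next field, so there is nothing equivalent
to initialize.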
Thanks for the review!
regards,
Jens
^ permalink raw reply [flat|nested] 22+ messages in thread
* Re: [dpdk-dev] [PATCH v7 5/8] net/virtio: implement transmit path for packed queues
2018-10-03 13:11 ` [dpdk-dev] [PATCH v7 5/8] net/virtio: implement transmit path for packed queues Jens Freimann
@ 2018-10-10 7:27 ` Maxime Coquelin
2018-10-10 11:43 ` Jens Freimann
2018-10-11 17:31 ` Maxime Coquelin
1 sibling, 1 reply; 22+ messages in thread
From: Maxime Coquelin @ 2018-10-10 7:27 UTC (permalink / raw)
To: Jens Freimann, dev; +Cc: tiwei.bie, Gavin.Hu
On 10/03/2018 03:11 PM, Jens Freimann wrote:
> This implements the transmit path for devices with
> support for packed virtqueues.
>
> Signed-off-by: Jens Freimann <jfreimann@redhat.com>
> ---
> drivers/net/virtio/virtio_ethdev.c | 32 ++--
> drivers/net/virtio/virtio_ethdev.h | 2 +
> drivers/net/virtio/virtio_ring.h | 15 +-
> drivers/net/virtio/virtio_rxtx.c | 276 +++++++++++++++++++++++++++++
> drivers/net/virtio/virtqueue.h | 18 +-
> 5 files changed, 329 insertions(+), 14 deletions(-)
>
> diff --git a/drivers/net/virtio/virtio_ethdev.c b/drivers/net/virtio/virtio_ethdev.c
> index d6a1613dd..c65ac365c 100644
> --- a/drivers/net/virtio/virtio_ethdev.c
> +++ b/drivers/net/virtio/virtio_ethdev.c
> @@ -390,6 +390,8 @@ virtio_init_queue(struct rte_eth_dev *dev, uint16_t vtpci_queue_idx)
> vq->hw = hw;
> vq->vq_queue_index = vtpci_queue_idx;
> vq->vq_nentries = vq_size;
> + if (vtpci_packed_queue(hw))
> + vq->vq_ring.avail_wrap_counter = 1;
>
> /*
> * Reserve a memzone for vring elements
> @@ -496,16 +498,22 @@ virtio_init_queue(struct rte_eth_dev *dev, uint16_t vtpci_queue_idx)
> memset(txr, 0, vq_size * sizeof(*txr));
> for (i = 0; i < vq_size; i++) {
> struct vring_desc *start_dp = txr[i].tx_indir;
> -
> - vring_desc_init_split(start_dp, RTE_DIM(txr[i].tx_indir));
> -
> + struct vring_desc_packed*start_dp_packed = txr[i].tx_indir_pq;
> +
> /* first indirect descriptor is always the tx header */
> - start_dp->addr = txvq->virtio_net_hdr_mem
> - + i * sizeof(*txr)
> - + offsetof(struct virtio_tx_region, tx_hdr);
> -
> - start_dp->len = hw->vtnet_hdr_size;
> - start_dp->flags = VRING_DESC_F_NEXT;
> + if (vtpci_packed_queue(hw)) {
No need to init desc here?
> + start_dp_packed->addr = txvq->virtio_net_hdr_mem
> + + i * sizeof(*txr)
> + + offsetof(struct virtio_tx_region, tx_hdr);
> + start_dp_packed->len = hw->vtnet_hdr_size;
> + } else {
> + vring_desc_init_split(start_dp, RTE_DIM(txr[i].tx_indir));
> + start_dp->addr = txvq->virtio_net_hdr_mem
> + + i * sizeof(*txr)
> + + offsetof(struct virtio_tx_region, tx_hdr);
> + start_dp->len = hw->vtnet_hdr_size;
> + start_dp->flags = VRING_DESC_F_NEXT;
> + }
> }
> }
>
> @@ -1344,7 +1352,11 @@ set_rxtx_funcs(struct rte_eth_dev *eth_dev)
> eth_dev->rx_pkt_burst = &virtio_recv_pkts;
> }
>
> - if (hw->use_inorder_tx) {
> + if (vtpci_packed_queue(hw)) {
> + PMD_INIT_LOG(INFO, "virtio: using virtio 1.1 Tx path on port %u",
> + eth_dev->data->port_id);
> + eth_dev->tx_pkt_burst = virtio_xmit_pkts_packed;
> + } else if (hw->use_inorder_tx) {
> PMD_INIT_LOG(INFO, "virtio: using inorder Tx path on port %u",
> eth_dev->data->port_id);
> eth_dev->tx_pkt_burst = virtio_xmit_pkts_inorder;
> diff --git a/drivers/net/virtio/virtio_ethdev.h b/drivers/net/virtio/virtio_ethdev.h
> index e0f80e5a4..05d355180 100644
> --- a/drivers/net/virtio/virtio_ethdev.h
> +++ b/drivers/net/virtio/virtio_ethdev.h
> @@ -82,6 +82,8 @@ uint16_t virtio_recv_mergeable_pkts_inorder(void *rx_queue,
>
> uint16_t virtio_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
> uint16_t nb_pkts);
> +uint16_t virtio_xmit_pkts_packed(void *tx_queue, struct rte_mbuf **tx_pkts,
> + uint16_t nb_pkts);
>
> uint16_t virtio_xmit_pkts_inorder(void *tx_queue, struct rte_mbuf **tx_pkts,
> uint16_t nb_pkts);
> diff --git a/drivers/net/virtio/virtio_ring.h b/drivers/net/virtio/virtio_ring.h
> index b9e63d4d4..dbffd4dcd 100644
> --- a/drivers/net/virtio/virtio_ring.h
> +++ b/drivers/net/virtio/virtio_ring.h
> @@ -108,14 +108,25 @@ set_desc_avail(struct vring *vr, struct vring_desc_packed *desc)
> }
>
> static inline int
> -desc_is_used(struct vring_desc_packed *desc, struct vring *vr)
> +_desc_is_used(struct vring_desc_packed *desc)
> {
> uint16_t used, avail;
>
> used = !!(desc->flags & VRING_DESC_F_USED(1));
> avail = !!(desc->flags & VRING_DESC_F_AVAIL(1));
>
> - return used == avail && used == vr->used_wrap_counter;
> + return used == avail;
> +
> +}
> +
> +static inline int
> +desc_is_used(struct vring_desc_packed *desc, struct vring *vr)
> +{
> + uint16_t used;
> +
> + used = !!(desc->flags & VRING_DESC_F_USED(1));
> +
> + return _desc_is_used(desc) && used == vr->used_wrap_counter;
> }
This is not in the right patch.
> /* The standard layout for the ring is a continuous chunk of memory which
> diff --git a/drivers/net/virtio/virtio_rxtx.c b/drivers/net/virtio/virtio_rxtx.c
> index eb891433e..4078fba8e 100644
> --- a/drivers/net/virtio/virtio_rxtx.c
> +++ b/drivers/net/virtio/virtio_rxtx.c
> @@ -38,6 +38,7 @@
> #define VIRTIO_DUMP_PACKET(m, len) do { } while (0)
> #endif
>
> +
Remove trailing line.
I still need to review the rest, but you can work on the above comments
in the meantime.
Regards,
Maxime
^ permalink raw reply [flat|nested] 22+ messages in thread
* Re: [dpdk-dev] [PATCH v7 5/8] net/virtio: implement transmit path for packed queues
2018-10-10 7:27 ` Maxime Coquelin
@ 2018-10-10 11:43 ` Jens Freimann
0 siblings, 0 replies; 22+ messages in thread
From: Jens Freimann @ 2018-10-10 11:43 UTC (permalink / raw)
To: Maxime Coquelin; +Cc: dev, tiwei.bie, Gavin.Hu
On Wed, Oct 10, 2018 at 09:27:31AM +0200, Maxime Coquelin wrote:
>
>
>On 10/03/2018 03:11 PM, Jens Freimann wrote:
>>This implements the transmit path for devices with
>>support for packed virtqueues.
>>
>>Signed-off-by: Jens Freimann <jfreimann@redhat.com>
>>---
>> drivers/net/virtio/virtio_ethdev.c | 32 ++--
>> drivers/net/virtio/virtio_ethdev.h | 2 +
>> drivers/net/virtio/virtio_ring.h | 15 +-
>> drivers/net/virtio/virtio_rxtx.c | 276 +++++++++++++++++++++++++++++
>> drivers/net/virtio/virtqueue.h | 18 +-
>> 5 files changed, 329 insertions(+), 14 deletions(-)
>>
>>diff --git a/drivers/net/virtio/virtio_ethdev.c b/drivers/net/virtio/virtio_ethdev.c
>>index d6a1613dd..c65ac365c 100644
>>--- a/drivers/net/virtio/virtio_ethdev.c
>>+++ b/drivers/net/virtio/virtio_ethdev.c
>>@@ -390,6 +390,8 @@ virtio_init_queue(struct rte_eth_dev *dev, uint16_t vtpci_queue_idx)
>> vq->hw = hw;
>> vq->vq_queue_index = vtpci_queue_idx;
>> vq->vq_nentries = vq_size;
>>+ if (vtpci_packed_queue(hw))
>>+ vq->vq_ring.avail_wrap_counter = 1;
>> /*
>> * Reserve a memzone for vring elements
>>@@ -496,16 +498,22 @@ virtio_init_queue(struct rte_eth_dev *dev, uint16_t vtpci_queue_idx)
>> memset(txr, 0, vq_size * sizeof(*txr));
>> for (i = 0; i < vq_size; i++) {
>> struct vring_desc *start_dp = txr[i].tx_indir;
>>-
>>- vring_desc_init_split(start_dp, RTE_DIM(txr[i].tx_indir));
>>-
>>+ struct vring_desc_packed*start_dp_packed = txr[i].tx_indir_pq;
>>+
>> /* first indirect descriptor is always the tx header */
>>- start_dp->addr = txvq->virtio_net_hdr_mem
>>- + i * sizeof(*txr)
>>- + offsetof(struct virtio_tx_region, tx_hdr);
>>-
>>- start_dp->len = hw->vtnet_hdr_size;
>>- start_dp->flags = VRING_DESC_F_NEXT;
>>+ if (vtpci_packed_queue(hw)) {
>No need to init desc here?
No, for split rings this is only done to chain descriptors (set the
next field). For packed rings we don't need this.
>
>>+ start_dp_packed->addr = txvq->virtio_net_hdr_mem
>>+ + i * sizeof(*txr)
>>+ + offsetof(struct virtio_tx_region, tx_hdr);
>>+ start_dp_packed->len = hw->vtnet_hdr_size;
>>+ } else {
>>+ vring_desc_init_split(start_dp, RTE_DIM(txr[i].tx_indir));
>>+ start_dp->addr = txvq->virtio_net_hdr_mem
>>+ + i * sizeof(*txr)
>>+ + offsetof(struct virtio_tx_region, tx_hdr);
>>+ start_dp->len = hw->vtnet_hdr_size;
>>+ start_dp->flags = VRING_DESC_F_NEXT;
>>+ }
>> }
>> }
>>@@ -1344,7 +1352,11 @@ set_rxtx_funcs(struct rte_eth_dev *eth_dev)
>> eth_dev->rx_pkt_burst = &virtio_recv_pkts;
>> }
>>- if (hw->use_inorder_tx) {
>>+ if (vtpci_packed_queue(hw)) {
>>+ PMD_INIT_LOG(INFO, "virtio: using virtio 1.1 Tx path on port %u",
>>+ eth_dev->data->port_id);
>>+ eth_dev->tx_pkt_burst = virtio_xmit_pkts_packed;
>>+ } else if (hw->use_inorder_tx) {
>> PMD_INIT_LOG(INFO, "virtio: using inorder Tx path on port %u",
>> eth_dev->data->port_id);
>> eth_dev->tx_pkt_burst = virtio_xmit_pkts_inorder;
>>diff --git a/drivers/net/virtio/virtio_ethdev.h b/drivers/net/virtio/virtio_ethdev.h
>>index e0f80e5a4..05d355180 100644
>>--- a/drivers/net/virtio/virtio_ethdev.h
>>+++ b/drivers/net/virtio/virtio_ethdev.h
>>@@ -82,6 +82,8 @@ uint16_t virtio_recv_mergeable_pkts_inorder(void *rx_queue,
>> uint16_t virtio_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
>> uint16_t nb_pkts);
>>+uint16_t virtio_xmit_pkts_packed(void *tx_queue, struct rte_mbuf **tx_pkts,
>>+ uint16_t nb_pkts);
>> uint16_t virtio_xmit_pkts_inorder(void *tx_queue, struct rte_mbuf **tx_pkts,
>> uint16_t nb_pkts);
>>diff --git a/drivers/net/virtio/virtio_ring.h b/drivers/net/virtio/virtio_ring.h
>>index b9e63d4d4..dbffd4dcd 100644
>>--- a/drivers/net/virtio/virtio_ring.h
>>+++ b/drivers/net/virtio/virtio_ring.h
>>@@ -108,14 +108,25 @@ set_desc_avail(struct vring *vr, struct vring_desc_packed *desc)
>> }
>> static inline int
>>-desc_is_used(struct vring_desc_packed *desc, struct vring *vr)
>>+_desc_is_used(struct vring_desc_packed *desc)
>> {
>> uint16_t used, avail;
>> used = !!(desc->flags & VRING_DESC_F_USED(1));
>> avail = !!(desc->flags & VRING_DESC_F_AVAIL(1));
>>- return used == avail && used == vr->used_wrap_counter;
>>+ return used == avail;
>>+
>>+}
>>+
>>+static inline int
>>+desc_is_used(struct vring_desc_packed *desc, struct vring *vr)
>>+{
>>+ uint16_t used;
>>+
>>+ used = !!(desc->flags & VRING_DESC_F_USED(1));
>>+
>>+ return _desc_is_used(desc) && used == vr->used_wrap_counter;
>> }
>
>This is not in the right patch.
yes, fixed.
>> /* The standard layout for the ring is a continuous chunk of memory which
>>diff --git a/drivers/net/virtio/virtio_rxtx.c b/drivers/net/virtio/virtio_rxtx.c
>>index eb891433e..4078fba8e 100644
>>--- a/drivers/net/virtio/virtio_rxtx.c
>>+++ b/drivers/net/virtio/virtio_rxtx.c
>>@@ -38,6 +38,7 @@
>> #define VIRTIO_DUMP_PACKET(m, len) do { } while (0)
>> #endif
>>+
>
>Remove trailing line.
>
>I still need to review the rest, but you can work on the above comments
>in the meantime.
done. Thanks for the review!
regards,
Jens
^ permalink raw reply [flat|nested] 22+ messages in thread
* Re: [dpdk-dev] [PATCH v7 5/8] net/virtio: implement transmit path for packed queues
2018-10-03 13:11 ` [dpdk-dev] [PATCH v7 5/8] net/virtio: implement transmit path for packed queues Jens Freimann
2018-10-10 7:27 ` Maxime Coquelin
@ 2018-10-11 17:31 ` Maxime Coquelin
2018-10-12 7:24 ` Jens Freimann
1 sibling, 1 reply; 22+ messages in thread
From: Maxime Coquelin @ 2018-10-11 17:31 UTC (permalink / raw)
To: Jens Freimann, dev; +Cc: tiwei.bie, Gavin.Hu
I'm testing your series, and it gets stuck after 256 packets in the
transmit path. When it happens, the descriptor flags indicate it has been
made available by the driver (desc->flags = 0x80), but that is not
consistent with the expected wrap counter value (0).
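To spell out how I decode flags = 0x80 with the macros from patch 2:

	avail = !!(0x80 & VRING_DESC_F_AVAIL(1));	/* bit 7:  1 */
	used  = !!(0x80 & VRING_DESC_F_USED(1));	/* bit 15: 0 */

avail (1) and used (0) differ, so the driver did make the descriptor
available; but avail does not match the wrap counter the device expects
(0), so the device never consumes it.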
Not sure this is the root cause, but it seems below code is broken:
On 10/03/2018 03:11 PM, Jens Freimann wrote:
> +static inline void
> +virtqueue_enqueue_xmit_packed(struct virtnet_tx *txvq, struct rte_mbuf *cookie,
> + uint16_t needed, int use_indirect, int can_push,
> + int in_order)
> +{
> + struct virtio_tx_region *txr = txvq->virtio_net_hdr_mz->addr;
> + struct vq_desc_extra *dxp, *head_dxp;
> + struct virtqueue *vq = txvq->vq;
> + struct vring_desc_packed *start_dp, *head_dp;
> + uint16_t seg_num = cookie->nb_segs;
> + uint16_t idx, head_id;
> + uint16_t head_size = vq->hw->vtnet_hdr_size;
> + struct virtio_net_hdr *hdr;
> + int wrap_counter = vq->vq_ring.avail_wrap_counter;
> +
> + head_id = vq->vq_desc_head_idx;
> + idx = head_id;
> + start_dp = vq->vq_ring.desc_packed;
> + dxp = &vq->vq_descx[idx];
> + dxp->ndescs = needed;
> +
> + head_dp = &vq->vq_ring.desc_packed[head_id];
> + head_dxp = &vq->vq_descx[head_id];
> + head_dxp->cookie = (void *) cookie;
> +
> + if (can_push) {
> + /* prepend cannot fail, checked by caller */
> + hdr = (struct virtio_net_hdr *)
> + rte_pktmbuf_prepend(cookie, head_size);
> + /* rte_pktmbuf_prepend() counts the hdr size to the pkt length,
> + * which is wrong. Below subtract restores correct pkt size.
> + */
> + cookie->pkt_len -= head_size;
> +
> + /* if offload disabled, it is not zeroed below, do it now */
> + if (!vq->hw->has_tx_offload) {
> + ASSIGN_UNLESS_EQUAL(hdr->csum_start, 0);
> + ASSIGN_UNLESS_EQUAL(hdr->csum_offset, 0);
> + ASSIGN_UNLESS_EQUAL(hdr->flags, 0);
> + ASSIGN_UNLESS_EQUAL(hdr->gso_type, 0);
> + ASSIGN_UNLESS_EQUAL(hdr->gso_size, 0);
> + ASSIGN_UNLESS_EQUAL(hdr->hdr_len, 0);
> + }
> + } else if (use_indirect) {
> + /* setup tx ring slot to point to indirect
> + * descriptor list stored in reserved region.
> + *
> + * the first slot in indirect ring is already preset
> + * to point to the header in reserved region
> + */
> + start_dp[idx].addr = txvq->virtio_net_hdr_mem +
> + RTE_PTR_DIFF(&txr[idx].tx_indir_pq, txr);
> + start_dp[idx].len = (seg_num + 1) * sizeof(struct vring_desc_packed);
> + start_dp[idx].flags = VRING_DESC_F_INDIRECT;
> + hdr = (struct virtio_net_hdr *)&txr[idx].tx_hdr;
> +
> + /* loop below will fill in rest of the indirect elements */
> + start_dp = txr[idx].tx_indir_pq;
> + idx = 1;
> + } else {
> + /* setup first tx ring slot to point to header
> + * stored in reserved region.
> + */
> + start_dp[idx].addr = txvq->virtio_net_hdr_mem +
> + RTE_PTR_DIFF(&txr[idx].tx_hdr, txr);
> + start_dp[idx].len = vq->hw->vtnet_hdr_size;
> + start_dp[idx].flags = VRING_DESC_F_NEXT;
> + start_dp[idx].flags |=
> + VRING_DESC_F_AVAIL(vq->vq_ring.avail_wrap_counter) |
> + VRING_DESC_F_USED(!vq->vq_ring.avail_wrap_counter);
> + hdr = (struct virtio_net_hdr *)&txr[idx].tx_hdr;
> + idx = dxp->next;
> + }
> +
> + virtqueue_xmit_offload(hdr, cookie, vq->hw->has_tx_offload);
> +
> + do {
> + if (idx >= vq->vq_nentries) {
> + idx = 0;
> + vq->vq_ring.avail_wrap_counter ^= 1;
> + }
> + start_dp[idx].addr = VIRTIO_MBUF_DATA_DMA_ADDR(cookie, vq);
> + start_dp[idx].len = cookie->data_len;
> + start_dp[idx].flags = cookie->next ? VRING_DESC_F_NEXT : 0;
> + start_dp[idx].flags |=
> + VRING_DESC_F_AVAIL(vq->vq_ring.avail_wrap_counter) |
> + VRING_DESC_F_USED(!vq->vq_ring.avail_wrap_counter);
> + if (use_indirect) {
> + if (++idx >= (seg_num + 1))
> + break;
> + } else {
> + dxp = &vq->vq_descx[idx];
> + idx = dxp->next;
> + }
Imagine current idx is 255, dxp->next will give idx 0, right?
In that case, for desc[0], on next iteration, the flags won't be set
available properly, as vq->vq_ring.avail_wrap_counter isn't updated.
I'm not sure how it could work like this; shouldn't dxp save the wrap
counter value in the out-of-order case?
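Something along these lines — purely hypothetical, just to illustrate
the question:

	struct vq_desc_extra {
		void *cookie;
		uint16_t ndescs;
		uint16_t next;
		uint16_t wrap;	/* hypothetical: avail wrap counter value
				 * captured when the chain was enqueued
				 */
	};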
^ permalink raw reply [flat|nested] 22+ messages in thread
* Re: [dpdk-dev] [PATCH v7 5/8] net/virtio: implement transmit path for packed queues
2018-10-11 17:31 ` Maxime Coquelin
@ 2018-10-12 7:24 ` Jens Freimann
2018-10-12 7:41 ` Maxime Coquelin
0 siblings, 1 reply; 22+ messages in thread
From: Jens Freimann @ 2018-10-12 7:24 UTC (permalink / raw)
To: Maxime Coquelin; +Cc: dev, tiwei.bie, Gavin.Hu
On Thu, Oct 11, 2018 at 07:31:57PM +0200, Maxime Coquelin wrote:
>
>I'm testing your series, and it gets stuck after 256 packets in the
>transmit path. When it happens, the descriptor flags indicate it has been
>made available by the driver (desc->flags = 0x80), but that is not
>consistent with the expected wrap counter value (0).
>
>Not sure this is the root cause, but it seems below code is broken:
>
>On 10/03/2018 03:11 PM, Jens Freimann wrote:
[snip]
>+
>>+ do {
>>+ if (idx >= vq->vq_nentries) {
>>+ idx = 0;
>>+ vq->vq_ring.avail_wrap_counter ^= 1;
>>+ }
>>+ start_dp[idx].addr = VIRTIO_MBUF_DATA_DMA_ADDR(cookie, vq);
>>+ start_dp[idx].len = cookie->data_len;
>>+ start_dp[idx].flags = cookie->next ? VRING_DESC_F_NEXT : 0;
>>+ start_dp[idx].flags |=
>>+ VRING_DESC_F_AVAIL(vq->vq_ring.avail_wrap_counter) |
>>+ VRING_DESC_F_USED(!vq->vq_ring.avail_wrap_counter);
>>+ if (use_indirect) {
>>+ if (++idx >= (seg_num + 1))
>>+ break;
>>+ } else {
>>+ dxp = &vq->vq_descx[idx];
>>+ idx = dxp->next;
>>+ }
>
>Imagine current idx is 255, dxp->next will give idx 0, right?
No, it will be VQ_RING_DESC_CHAIN_END, which is defined as 32768.
>In that case, for desc[0], on next iteration, the flags won't be set
>available properly, as vq->vq_ring.avail_wrap_counter isn't updated.
It will wrap because VQ_RING_DESC_CHAIN_END is > ring size: the
idx >= vq->vq_nentries check at the top of the loop then resets idx to 0
and flips avail_wrap_counter.
I can't reproduce what you see. Are you testing with txonly fwd mode?
regards,
Jens
^ permalink raw reply [flat|nested] 22+ messages in thread
* Re: [dpdk-dev] [PATCH v7 5/8] net/virtio: implement transmit path for packed queues
2018-10-12 7:24 ` Jens Freimann
@ 2018-10-12 7:41 ` Maxime Coquelin
0 siblings, 0 replies; 22+ messages in thread
From: Maxime Coquelin @ 2018-10-12 7:41 UTC (permalink / raw)
To: Jens Freimann; +Cc: dev, tiwei.bie, Gavin.Hu
On 10/12/2018 09:24 AM, Jens Freimann wrote:
> On Thu, Oct 11, 2018 at 07:31:57PM +0200, Maxime Coquelin wrote:
>>
>> I'm testing your series, and it gets stuck after 256 packets in the
>> transmit path. When it happens, the descriptor flags indicate it has been
>> made available by the driver (desc->flags = 0x80), but that is not
>> consistent with the expected wrap counter value (0).
>>
>> Not sure this is the root cause, but it seems below code is broken:
>
>>
>> On 10/03/2018 03:11 PM, Jens Freimann wrote:
>
> [snip]
>> +
>>> + do {
>>> + if (idx >= vq->vq_nentries) {
>>> + idx = 0;
>>> + vq->vq_ring.avail_wrap_counter ^= 1;
>>> + }
>>> + start_dp[idx].addr = VIRTIO_MBUF_DATA_DMA_ADDR(cookie, vq);
>>> + start_dp[idx].len = cookie->data_len;
>>> + start_dp[idx].flags = cookie->next ? VRING_DESC_F_NEXT : 0;
>>> + start_dp[idx].flags |=
>>> + VRING_DESC_F_AVAIL(vq->vq_ring.avail_wrap_counter) |
>>> + VRING_DESC_F_USED(!vq->vq_ring.avail_wrap_counter);
>>> + if (use_indirect) {
>>> + if (++idx >= (seg_num + 1))
>>> + break;
>>> + } else {
>>> + dxp = &vq->vq_descx[idx];
>>> + idx = dxp->next;
>>> + }
>>
>> Imagine current idx is 255, dxp->next will give idx 0, right?
>
> No, it will be VQ_RING_DESC_CHAIN_END, which is defined as 32768.
>> In that case, for desc[0], on next iteration, the flags won't be set
>> available properly, as vq->vq_ring.avail_wrap_counter isn't updated.
>
> It will wrap because VQ_RING_DESC_CHAIN_END is > ring size: the
> idx >= vq->vq_nentries check at the top of the loop then resets idx to 0
> and flips avail_wrap_counter.
I'm not sure I understand. I'll dig a bit deeper into the code and may
come back if I have questions.
> I can't reproduce what you see. Are you testing with txonly fwd mode?
Yes, txonly fwd mode.
On the host side, I have the latest master with my two-patch series
enabling packed ring in the vhost backend, plus Tiwei's patch fixing
notification disablement.
For QEMU, I use the branch you shared and started it with the below
cmdline:

./x86_64-softmmu/qemu-system-x86_64 -enable-kvm -m 3072 -smp 3 \
    -machine q35 -cpu host \
    -chardev socket,id=char0,path=/tmp/vhost-user1 \
    -netdev type=vhost-user,id=hn2,chardev=char0,vhostforce,queues=1 \
    -device virtio-net-pci,netdev=hn2,id=v0,mq=off,mrg_rxbuf=off,ring_packed=on,mac=52:54:00:11:22:11 \
    -object memory-backend-file,id=mem,size=3G,mem-path=/dev/hugepages,share=on \
    -numa node,memdev=mem -mem-prealloc -k fr \
    -net user,hostfwd=tcp::10021-:22 -net nic -serial stdio \
    /home/virt/rhel7.6-1.qcow2
Regards,
Maxime
> regards,
> Jens
^ permalink raw reply [flat|nested] 22+ messages in thread