* [PATCH v1 00/21] Add control queue & MQ support to Virtio-user vDPA
@ 2022-11-30 15:56 Maxime Coquelin
2022-11-30 15:56 ` [PATCH v1 01/21] net/virtio: move CVQ code into a dedicated file Maxime Coquelin
` (21 more replies)
0 siblings, 22 replies; 48+ messages in thread
From: Maxime Coquelin @ 2022-11-30 15:56 UTC (permalink / raw)
To: dev, chenbo.xia, david.marchand, eperezma; +Cc: Maxime Coquelin
This series introduces control queue support for the Vhost-vDPA
backend. This is a requirement to support multiqueue, but it
will also be useful for other features such as RSS.
Since the Virtio-user layer of the Virtio PMD must handle
some control messages, such as setting the number of queue
pairs to be used by the device, a shadow control queue is
created at the Virtio-user layer.
Control messages from the regular Virtio control queue
are still dequeued and handled if needed by the Virtio-user
layer, and are then forwarded to the shadow control queue
so that the physical vDPA device can handle them.
This model is similar to the one adopted by the QEMU
project.
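The interception-then-forward model above can be sketched as follows. This is an illustrative reduction, not the driver's actual code: the structure and names (`ctrl_msg`, `shadow_cvq_intercept`, `local_qpairs`) are hypothetical stand-ins for the virtio-net control header and the Virtio-user device state.

```c
#include <stdint.h>

/* Hypothetical message layout, mirroring virtio_net_ctrl_hdr plus a
 * 16-bit payload (e.g. the requested number of queue pairs). */
struct ctrl_msg {
	uint8_t class;    /* e.g. VIRTIO_NET_CTRL_MQ */
	uint8_t cmd;
	uint16_t payload;
};

#define CTRL_CLASS_MQ 4

/* The Virtio-user layer snoops the messages it cares about (here, the
 * MQ queue-pair count), updates its local view, and reports whether the
 * message must still be relayed to the shadow control queue (always
 * true in this model, so the vDPA device also processes it). */
int shadow_cvq_intercept(const struct ctrl_msg *msg, uint16_t *local_qpairs)
{
	if (msg->class == CTRL_CLASS_MQ)
		*local_qpairs = msg->payload; /* remember for queue setup */
	return 1; /* forward to the shadow control queue regardless */
}
```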
In order to avoid code duplication, virtqueue allocation
and control queue message sending have been factored out
of the Virtio layer so they can be reused by the Virtio-user
layer.
Finally, in order to support vDPA hardware that may
expose a large number of queues, the last patch removes
the 8 queue pairs limitation by dynamically allocating
vring metadata.
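The dynamic-allocation idea can be illustrated with a minimal sketch. The `vring_meta` struct and `alloc_vring_meta` helper are hypothetical simplifications of the per-virtqueue state (kick/call file descriptors, ring addresses) the driver actually keeps; the point is sizing the metadata from the device-reported maximum instead of a fixed 8-pair array.

```c
#include <stdlib.h>
#include <stdint.h>

/* Hypothetical per-vring bookkeeping; the real driver also tracks
 * vring addresses and sizes per queue. */
struct vring_meta {
	int callfd;
	int kickfd;
};

/* Allocate metadata for however many virtqueues the device reports:
 * two per queue pair (Rx + Tx) plus one control queue. */
struct vring_meta *alloc_vring_meta(uint16_t max_qpairs)
{
	uint16_t nr_vq = (uint16_t)(2 * max_qpairs + 1);
	struct vring_meta *m = calloc(nr_vq, sizeof(*m));

	if (m != NULL) {
		for (uint16_t i = 0; i < nr_vq; i++) {
			/* -1 marks fds as not yet initialized */
			m[i].callfd = -1;
			m[i].kickfd = -1;
		}
	}
	return m;
}
```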
The series has been tested with an NVIDIA ConnectX-6 Dx NIC
with up to 16 queue pairs:
# echo 0 > /sys/bus/pci/devices/0000\:3b\:00.0/sriov_numvfs
# echo 0 > /sys/bus/pci/devices/0000\:3b\:00.1/sriov_numvfs
# modprobe vhost_vdpa
# modprobe mlx5_vdpa
# echo 1 > /sys/bus/pci/devices/0000\:3b\:00.0/sriov_numvfs
# echo 0000:3b:00.2 >/sys/bus/pci/drivers/mlx5_core/unbind
# devlink dev eswitch set pci/0000:3b:00.0 mode switchdev
# echo 0000:3b:00.2 >/sys/bus/pci/drivers/mlx5_core/bind
# vdpa dev add name vdpa0 mgmtdev pci/0000:3b:00.2 mac 00:11:22:33:44:03 max_vqp 16
# ulimit -l unlimited
# dpdk-testpmd -l 0,2,4,6 --socket-mem 1024,0 --vdev 'virtio_user0,path=/dev/vhost-vdpa-0' --no-pci -n 3 -- --nb-cores=3 -i --rxq=16 --txq=16
Maxime Coquelin (21):
net/virtio: move CVQ code into a dedicated file
net/virtio: introduce notify callback for control queue
net/virtio: virtqueue headers alloc refactoring
net/virtio: remove port ID info from Rx queue
net/virtio: remove unused fields in Tx queue struct
net/virtio: remove unused queue ID field in Rx queue
net/virtio: remove unused Port ID in control queue
net/virtio: move vring memzone to virtqueue struct
net/virtio: refactor indirect desc headers init
net/virtio: alloc Rx SW ring only if vectorized path
net/virtio: extract virtqueue init from virtio queue init
net/virtio-user: fix device starting failure handling
net/virtio-user: simplify queues setup
net/virtio-user: use proper type for number of queue pairs
net/virtio-user: get max number of queue pairs from device
net/virtio-user: allocate shadow control queue
net/virtio-user: send shadow virtqueue info to the backend
net/virtio-user: add new callback to enable control queue
net/virtio-user: forward control messages to shadow queue
net/virtio-user: advertize control VQ support with vDPA
net/virtio-user: remove max queues limitation
drivers/net/virtio/meson.build | 1 +
drivers/net/virtio/virtio.h | 6 -
drivers/net/virtio/virtio_cvq.c | 229 +++++++++
drivers/net/virtio/virtio_cvq.h | 127 +++++
drivers/net/virtio/virtio_ethdev.c | 472 +-----------------
drivers/net/virtio/virtio_rxtx.c | 47 +-
drivers/net/virtio/virtio_rxtx.h | 31 +-
drivers/net/virtio/virtio_rxtx_packed.c | 3 +-
drivers/net/virtio/virtio_rxtx_simple.c | 3 +-
drivers/net/virtio/virtio_rxtx_simple.h | 7 +-
.../net/virtio/virtio_rxtx_simple_altivec.c | 4 +-
drivers/net/virtio/virtio_rxtx_simple_neon.c | 4 +-
drivers/net/virtio/virtio_rxtx_simple_sse.c | 4 +-
drivers/net/virtio/virtio_user/vhost.h | 1 +
drivers/net/virtio/virtio_user/vhost_vdpa.c | 19 +-
.../net/virtio/virtio_user/virtio_user_dev.c | 300 +++++++++--
.../net/virtio/virtio_user/virtio_user_dev.h | 30 +-
drivers/net/virtio/virtio_user_ethdev.c | 49 +-
drivers/net/virtio/virtqueue.c | 346 ++++++++++++-
drivers/net/virtio/virtqueue.h | 127 +----
20 files changed, 1064 insertions(+), 746 deletions(-)
create mode 100644 drivers/net/virtio/virtio_cvq.c
create mode 100644 drivers/net/virtio/virtio_cvq.h
--
2.38.1
^ permalink raw reply [flat|nested] 48+ messages in thread
* [PATCH v1 01/21] net/virtio: move CVQ code into a dedicated file
2022-11-30 15:56 [PATCH v1 00/21] Add control queue & MQ support to Virtio-user vDPA Maxime Coquelin
@ 2022-11-30 15:56 ` Maxime Coquelin
2023-01-30 7:50 ` Xia, Chenbo
2022-11-30 15:56 ` [PATCH v1 02/21] net/virtio: introduce notify callback for control queue Maxime Coquelin
` (20 subsequent siblings)
21 siblings, 1 reply; 48+ messages in thread
From: Maxime Coquelin @ 2022-11-30 15:56 UTC (permalink / raw)
To: dev, chenbo.xia, david.marchand, eperezma; +Cc: Maxime Coquelin
This patch moves the Virtio control queue code into a dedicated
file, as a preliminary rework to support the shadow control queue
in Virtio-user.
Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
---
drivers/net/virtio/meson.build | 1 +
drivers/net/virtio/virtio_cvq.c | 230 +++++++++++++++++++++++++++++
drivers/net/virtio/virtio_cvq.h | 126 ++++++++++++++++
drivers/net/virtio/virtio_ethdev.c | 218 +--------------------------
drivers/net/virtio/virtio_rxtx.h | 9 --
drivers/net/virtio/virtqueue.h | 105 +------------
6 files changed, 359 insertions(+), 330 deletions(-)
create mode 100644 drivers/net/virtio/virtio_cvq.c
create mode 100644 drivers/net/virtio/virtio_cvq.h
diff --git a/drivers/net/virtio/meson.build b/drivers/net/virtio/meson.build
index d78b8278c6..0ffd77024e 100644
--- a/drivers/net/virtio/meson.build
+++ b/drivers/net/virtio/meson.build
@@ -9,6 +9,7 @@ endif
sources += files(
'virtio.c',
+ 'virtio_cvq.c',
'virtio_ethdev.c',
'virtio_pci_ethdev.c',
'virtio_pci.c',
diff --git a/drivers/net/virtio/virtio_cvq.c b/drivers/net/virtio/virtio_cvq.c
new file mode 100644
index 0000000000..de4299a2a7
--- /dev/null
+++ b/drivers/net/virtio/virtio_cvq.c
@@ -0,0 +1,230 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2010-2016 Intel Corporation
+ * Copyright(c) 2022 Red Hat Inc,
+ */
+
+#include <unistd.h>
+
+#include <rte_common.h>
+#include <rte_eal.h>
+#include <rte_errno.h>
+
+#include "virtio_cvq.h"
+#include "virtqueue.h"
+
+static struct virtio_pmd_ctrl *
+virtio_send_command_packed(struct virtnet_ctl *cvq,
+ struct virtio_pmd_ctrl *ctrl,
+ int *dlen, int pkt_num)
+{
+ struct virtqueue *vq = virtnet_cq_to_vq(cvq);
+ int head;
+ struct vring_packed_desc *desc = vq->vq_packed.ring.desc;
+ struct virtio_pmd_ctrl *result;
+ uint16_t flags;
+ int sum = 0;
+ int nb_descs = 0;
+ int k;
+
+ /*
+ * Format is enforced in qemu code:
+ * One TX packet for header;
+ * At least one TX packet per argument;
+ * One RX packet for ACK.
+ */
+ head = vq->vq_avail_idx;
+ flags = vq->vq_packed.cached_flags;
+ desc[head].addr = cvq->virtio_net_hdr_mem;
+ desc[head].len = sizeof(struct virtio_net_ctrl_hdr);
+ vq->vq_free_cnt--;
+ nb_descs++;
+ if (++vq->vq_avail_idx >= vq->vq_nentries) {
+ vq->vq_avail_idx -= vq->vq_nentries;
+ vq->vq_packed.cached_flags ^= VRING_PACKED_DESC_F_AVAIL_USED;
+ }
+
+ for (k = 0; k < pkt_num; k++) {
+ desc[vq->vq_avail_idx].addr = cvq->virtio_net_hdr_mem
+ + sizeof(struct virtio_net_ctrl_hdr)
+ + sizeof(ctrl->status) + sizeof(uint8_t) * sum;
+ desc[vq->vq_avail_idx].len = dlen[k];
+ desc[vq->vq_avail_idx].flags = VRING_DESC_F_NEXT |
+ vq->vq_packed.cached_flags;
+ sum += dlen[k];
+ vq->vq_free_cnt--;
+ nb_descs++;
+ if (++vq->vq_avail_idx >= vq->vq_nentries) {
+ vq->vq_avail_idx -= vq->vq_nentries;
+ vq->vq_packed.cached_flags ^=
+ VRING_PACKED_DESC_F_AVAIL_USED;
+ }
+ }
+
+ desc[vq->vq_avail_idx].addr = cvq->virtio_net_hdr_mem
+ + sizeof(struct virtio_net_ctrl_hdr);
+ desc[vq->vq_avail_idx].len = sizeof(ctrl->status);
+ desc[vq->vq_avail_idx].flags = VRING_DESC_F_WRITE |
+ vq->vq_packed.cached_flags;
+ vq->vq_free_cnt--;
+ nb_descs++;
+ if (++vq->vq_avail_idx >= vq->vq_nentries) {
+ vq->vq_avail_idx -= vq->vq_nentries;
+ vq->vq_packed.cached_flags ^= VRING_PACKED_DESC_F_AVAIL_USED;
+ }
+
+ virtqueue_store_flags_packed(&desc[head], VRING_DESC_F_NEXT | flags,
+ vq->hw->weak_barriers);
+
+ virtio_wmb(vq->hw->weak_barriers);
+ virtqueue_notify(vq);
+
+ /* wait for used desc in virtqueue
+ * desc_is_used has a load-acquire or rte_io_rmb inside
+ */
+ while (!desc_is_used(&desc[head], vq))
+ usleep(100);
+
+ /* now get used descriptors */
+ vq->vq_free_cnt += nb_descs;
+ vq->vq_used_cons_idx += nb_descs;
+ if (vq->vq_used_cons_idx >= vq->vq_nentries) {
+ vq->vq_used_cons_idx -= vq->vq_nentries;
+ vq->vq_packed.used_wrap_counter ^= 1;
+ }
+
+ PMD_INIT_LOG(DEBUG, "vq->vq_free_cnt=%d\n"
+ "vq->vq_avail_idx=%d\n"
+ "vq->vq_used_cons_idx=%d\n"
+ "vq->vq_packed.cached_flags=0x%x\n"
+ "vq->vq_packed.used_wrap_counter=%d",
+ vq->vq_free_cnt,
+ vq->vq_avail_idx,
+ vq->vq_used_cons_idx,
+ vq->vq_packed.cached_flags,
+ vq->vq_packed.used_wrap_counter);
+
+ result = cvq->virtio_net_hdr_mz->addr;
+ return result;
+}
+
+static struct virtio_pmd_ctrl *
+virtio_send_command_split(struct virtnet_ctl *cvq,
+ struct virtio_pmd_ctrl *ctrl,
+ int *dlen, int pkt_num)
+{
+ struct virtio_pmd_ctrl *result;
+ struct virtqueue *vq = virtnet_cq_to_vq(cvq);
+ uint32_t head, i;
+ int k, sum = 0;
+
+ head = vq->vq_desc_head_idx;
+
+ /*
+ * Format is enforced in qemu code:
+ * One TX packet for header;
+ * At least one TX packet per argument;
+ * One RX packet for ACK.
+ */
+ vq->vq_split.ring.desc[head].flags = VRING_DESC_F_NEXT;
+ vq->vq_split.ring.desc[head].addr = cvq->virtio_net_hdr_mem;
+ vq->vq_split.ring.desc[head].len = sizeof(struct virtio_net_ctrl_hdr);
+ vq->vq_free_cnt--;
+ i = vq->vq_split.ring.desc[head].next;
+
+ for (k = 0; k < pkt_num; k++) {
+ vq->vq_split.ring.desc[i].flags = VRING_DESC_F_NEXT;
+ vq->vq_split.ring.desc[i].addr = cvq->virtio_net_hdr_mem
+ + sizeof(struct virtio_net_ctrl_hdr)
+ + sizeof(ctrl->status) + sizeof(uint8_t) * sum;
+ vq->vq_split.ring.desc[i].len = dlen[k];
+ sum += dlen[k];
+ vq->vq_free_cnt--;
+ i = vq->vq_split.ring.desc[i].next;
+ }
+
+ vq->vq_split.ring.desc[i].flags = VRING_DESC_F_WRITE;
+ vq->vq_split.ring.desc[i].addr = cvq->virtio_net_hdr_mem
+ + sizeof(struct virtio_net_ctrl_hdr);
+ vq->vq_split.ring.desc[i].len = sizeof(ctrl->status);
+ vq->vq_free_cnt--;
+
+ vq->vq_desc_head_idx = vq->vq_split.ring.desc[i].next;
+
+ vq_update_avail_ring(vq, head);
+ vq_update_avail_idx(vq);
+
+ PMD_INIT_LOG(DEBUG, "vq->vq_queue_index = %d", vq->vq_queue_index);
+
+ virtqueue_notify(vq);
+
+ while (virtqueue_nused(vq) == 0)
+ usleep(100);
+
+ while (virtqueue_nused(vq)) {
+ uint32_t idx, desc_idx, used_idx;
+ struct vring_used_elem *uep;
+
+ used_idx = (uint32_t)(vq->vq_used_cons_idx
+ & (vq->vq_nentries - 1));
+ uep = &vq->vq_split.ring.used->ring[used_idx];
+ idx = (uint32_t)uep->id;
+ desc_idx = idx;
+
+ while (vq->vq_split.ring.desc[desc_idx].flags &
+ VRING_DESC_F_NEXT) {
+ desc_idx = vq->vq_split.ring.desc[desc_idx].next;
+ vq->vq_free_cnt++;
+ }
+
+ vq->vq_split.ring.desc[desc_idx].next = vq->vq_desc_head_idx;
+ vq->vq_desc_head_idx = idx;
+
+ vq->vq_used_cons_idx++;
+ vq->vq_free_cnt++;
+ }
+
+ PMD_INIT_LOG(DEBUG, "vq->vq_free_cnt=%d\nvq->vq_desc_head_idx=%d",
+ vq->vq_free_cnt, vq->vq_desc_head_idx);
+
+ result = cvq->virtio_net_hdr_mz->addr;
+ return result;
+}
+
+int
+virtio_send_command(struct virtnet_ctl *cvq, struct virtio_pmd_ctrl *ctrl, int *dlen, int pkt_num)
+{
+ virtio_net_ctrl_ack status = ~0;
+ struct virtio_pmd_ctrl *result;
+ struct virtqueue *vq;
+
+ ctrl->status = status;
+
+ if (!cvq) {
+ PMD_INIT_LOG(ERR, "Control queue is not supported.");
+ return -1;
+ }
+
+ rte_spinlock_lock(&cvq->lock);
+ vq = virtnet_cq_to_vq(cvq);
+
+ PMD_INIT_LOG(DEBUG, "vq->vq_desc_head_idx = %d, status = %d, "
+ "vq->hw->cvq = %p vq = %p",
+ vq->vq_desc_head_idx, status, vq->hw->cvq, vq);
+
+ if (vq->vq_free_cnt < pkt_num + 2 || pkt_num < 1) {
+ rte_spinlock_unlock(&cvq->lock);
+ return -1;
+ }
+
+ memcpy(cvq->virtio_net_hdr_mz->addr, ctrl,
+ sizeof(struct virtio_pmd_ctrl));
+
+ if (virtio_with_packed_queue(vq->hw))
+ result = virtio_send_command_packed(cvq, ctrl, dlen, pkt_num);
+ else
+ result = virtio_send_command_split(cvq, ctrl, dlen, pkt_num);
+
+ rte_spinlock_unlock(&cvq->lock);
+ return result->status;
+}
+
diff --git a/drivers/net/virtio/virtio_cvq.h b/drivers/net/virtio/virtio_cvq.h
new file mode 100644
index 0000000000..139e813ffb
--- /dev/null
+++ b/drivers/net/virtio/virtio_cvq.h
@@ -0,0 +1,126 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2010-2015 Intel Corporation
+ */
+
+#ifndef _VIRTIO_CVQ_H_
+#define _VIRTIO_CVQ_H_
+
+#include <rte_ether.h>
+
+/**
+ * Control the RX mode, ie. promiscuous, allmulti, etc...
+ * All commands require an "out" sg entry containing a 1 byte
+ * state value, zero = disable, non-zero = enable. Commands
+ * 0 and 1 are supported with the VIRTIO_NET_F_CTRL_RX feature.
+ * Commands 2-5 are added with VIRTIO_NET_F_CTRL_RX_EXTRA.
+ */
+#define VIRTIO_NET_CTRL_RX 0
+#define VIRTIO_NET_CTRL_RX_PROMISC 0
+#define VIRTIO_NET_CTRL_RX_ALLMULTI 1
+#define VIRTIO_NET_CTRL_RX_ALLUNI 2
+#define VIRTIO_NET_CTRL_RX_NOMULTI 3
+#define VIRTIO_NET_CTRL_RX_NOUNI 4
+#define VIRTIO_NET_CTRL_RX_NOBCAST 5
+
+/**
+ * Control the MAC
+ *
+ * The MAC filter table is managed by the hypervisor, the guest should
+ * assume the size is infinite. Filtering should be considered
+ * non-perfect, ie. based on hypervisor resources, the guest may
+ * received packets from sources not specified in the filter list.
+ *
+ * In addition to the class/cmd header, the TABLE_SET command requires
+ * two out scatterlists. Each contains a 4 byte count of entries followed
+ * by a concatenated byte stream of the ETH_ALEN MAC addresses. The
+ * first sg list contains unicast addresses, the second is for multicast.
+ * This functionality is present if the VIRTIO_NET_F_CTRL_RX feature
+ * is available.
+ *
+ * The ADDR_SET command requests one out scatterlist, it contains a
+ * 6 bytes MAC address. This functionality is present if the
+ * VIRTIO_NET_F_CTRL_MAC_ADDR feature is available.
+ */
+struct virtio_net_ctrl_mac {
+ uint32_t entries;
+ uint8_t macs[][RTE_ETHER_ADDR_LEN];
+} __rte_packed;
+
+#define VIRTIO_NET_CTRL_MAC 1
+#define VIRTIO_NET_CTRL_MAC_TABLE_SET 0
+#define VIRTIO_NET_CTRL_MAC_ADDR_SET 1
+
+/**
+ * Control VLAN filtering
+ *
+ * The VLAN filter table is controlled via a simple ADD/DEL interface.
+ * VLAN IDs not added may be filtered by the hypervisor. Del is the
+ * opposite of add. Both commands expect an out entry containing a 2
+ * byte VLAN ID. VLAN filtering is available with the
+ * VIRTIO_NET_F_CTRL_VLAN feature bit.
+ */
+#define VIRTIO_NET_CTRL_VLAN 2
+#define VIRTIO_NET_CTRL_VLAN_ADD 0
+#define VIRTIO_NET_CTRL_VLAN_DEL 1
+
+/**
+ * RSS control
+ *
+ * The RSS feature configuration message is sent by the driver when
+ * VIRTIO_NET_F_RSS has been negotiated. It provides the device with
+ * hash types to use, hash key and indirection table. In this
+ * implementation, the driver only supports fixed key length (40B)
+ * and indirection table size (128 entries).
+ */
+#define VIRTIO_NET_RSS_RETA_SIZE 128
+#define VIRTIO_NET_RSS_KEY_SIZE 40
+
+struct virtio_net_ctrl_rss {
+ uint32_t hash_types;
+ uint16_t indirection_table_mask;
+ uint16_t unclassified_queue;
+ uint16_t indirection_table[VIRTIO_NET_RSS_RETA_SIZE];
+ uint16_t max_tx_vq;
+ uint8_t hash_key_length;
+ uint8_t hash_key_data[VIRTIO_NET_RSS_KEY_SIZE];
+};
+
+/*
+ * Control link announce acknowledgment
+ *
+ * The command VIRTIO_NET_CTRL_ANNOUNCE_ACK is used to indicate that
+ * driver has received the notification; device would clear the
+ * VIRTIO_NET_S_ANNOUNCE bit in the status field after it receives
+ * this command.
+ */
+#define VIRTIO_NET_CTRL_ANNOUNCE 3
+#define VIRTIO_NET_CTRL_ANNOUNCE_ACK 0
+
+struct virtio_net_ctrl_hdr {
+ uint8_t class;
+ uint8_t cmd;
+} __rte_packed;
+
+typedef uint8_t virtio_net_ctrl_ack;
+
+struct virtnet_ctl {
+ /**< memzone to populate hdr. */
+ const struct rte_memzone *virtio_net_hdr_mz;
+ rte_iova_t virtio_net_hdr_mem; /**< hdr for each xmit packet */
+ uint16_t port_id; /**< Device port identifier. */
+ const struct rte_memzone *mz; /**< mem zone to populate CTL ring. */
+ rte_spinlock_t lock; /**< spinlock for control queue. */
+};
+
+#define VIRTIO_MAX_CTRL_DATA 2048
+
+struct virtio_pmd_ctrl {
+ struct virtio_net_ctrl_hdr hdr;
+ virtio_net_ctrl_ack status;
+ uint8_t data[VIRTIO_MAX_CTRL_DATA];
+};
+
+int
+virtio_send_command(struct virtnet_ctl *cvq, struct virtio_pmd_ctrl *ctrl, int *dlen, int pkt_num);
+
+#endif /* _VIRTIO_CVQ_H_ */
diff --git a/drivers/net/virtio/virtio_ethdev.c b/drivers/net/virtio/virtio_ethdev.c
index 760ba4e368..d553f89a0d 100644
--- a/drivers/net/virtio/virtio_ethdev.c
+++ b/drivers/net/virtio/virtio_ethdev.c
@@ -33,6 +33,7 @@
#include "virtio.h"
#include "virtio_logs.h"
#include "virtqueue.h"
+#include "virtio_cvq.h"
#include "virtio_rxtx.h"
#include "virtio_rxtx_simple.h"
#include "virtio_user/virtio_user_dev.h"
@@ -142,223 +143,6 @@ static const struct rte_virtio_xstats_name_off rte_virtio_txq_stat_strings[] = {
struct virtio_hw_internal virtio_hw_internal[RTE_MAX_ETHPORTS];
-static struct virtio_pmd_ctrl *
-virtio_send_command_packed(struct virtnet_ctl *cvq,
- struct virtio_pmd_ctrl *ctrl,
- int *dlen, int pkt_num)
-{
- struct virtqueue *vq = virtnet_cq_to_vq(cvq);
- int head;
- struct vring_packed_desc *desc = vq->vq_packed.ring.desc;
- struct virtio_pmd_ctrl *result;
- uint16_t flags;
- int sum = 0;
- int nb_descs = 0;
- int k;
-
- /*
- * Format is enforced in qemu code:
- * One TX packet for header;
- * At least one TX packet per argument;
- * One RX packet for ACK.
- */
- head = vq->vq_avail_idx;
- flags = vq->vq_packed.cached_flags;
- desc[head].addr = cvq->virtio_net_hdr_mem;
- desc[head].len = sizeof(struct virtio_net_ctrl_hdr);
- vq->vq_free_cnt--;
- nb_descs++;
- if (++vq->vq_avail_idx >= vq->vq_nentries) {
- vq->vq_avail_idx -= vq->vq_nentries;
- vq->vq_packed.cached_flags ^= VRING_PACKED_DESC_F_AVAIL_USED;
- }
-
- for (k = 0; k < pkt_num; k++) {
- desc[vq->vq_avail_idx].addr = cvq->virtio_net_hdr_mem
- + sizeof(struct virtio_net_ctrl_hdr)
- + sizeof(ctrl->status) + sizeof(uint8_t) * sum;
- desc[vq->vq_avail_idx].len = dlen[k];
- desc[vq->vq_avail_idx].flags = VRING_DESC_F_NEXT |
- vq->vq_packed.cached_flags;
- sum += dlen[k];
- vq->vq_free_cnt--;
- nb_descs++;
- if (++vq->vq_avail_idx >= vq->vq_nentries) {
- vq->vq_avail_idx -= vq->vq_nentries;
- vq->vq_packed.cached_flags ^=
- VRING_PACKED_DESC_F_AVAIL_USED;
- }
- }
-
- desc[vq->vq_avail_idx].addr = cvq->virtio_net_hdr_mem
- + sizeof(struct virtio_net_ctrl_hdr);
- desc[vq->vq_avail_idx].len = sizeof(ctrl->status);
- desc[vq->vq_avail_idx].flags = VRING_DESC_F_WRITE |
- vq->vq_packed.cached_flags;
- vq->vq_free_cnt--;
- nb_descs++;
- if (++vq->vq_avail_idx >= vq->vq_nentries) {
- vq->vq_avail_idx -= vq->vq_nentries;
- vq->vq_packed.cached_flags ^= VRING_PACKED_DESC_F_AVAIL_USED;
- }
-
- virtqueue_store_flags_packed(&desc[head], VRING_DESC_F_NEXT | flags,
- vq->hw->weak_barriers);
-
- virtio_wmb(vq->hw->weak_barriers);
- virtqueue_notify(vq);
-
- /* wait for used desc in virtqueue
- * desc_is_used has a load-acquire or rte_io_rmb inside
- */
- while (!desc_is_used(&desc[head], vq))
- usleep(100);
-
- /* now get used descriptors */
- vq->vq_free_cnt += nb_descs;
- vq->vq_used_cons_idx += nb_descs;
- if (vq->vq_used_cons_idx >= vq->vq_nentries) {
- vq->vq_used_cons_idx -= vq->vq_nentries;
- vq->vq_packed.used_wrap_counter ^= 1;
- }
-
- PMD_INIT_LOG(DEBUG, "vq->vq_free_cnt=%d\n"
- "vq->vq_avail_idx=%d\n"
- "vq->vq_used_cons_idx=%d\n"
- "vq->vq_packed.cached_flags=0x%x\n"
- "vq->vq_packed.used_wrap_counter=%d",
- vq->vq_free_cnt,
- vq->vq_avail_idx,
- vq->vq_used_cons_idx,
- vq->vq_packed.cached_flags,
- vq->vq_packed.used_wrap_counter);
-
- result = cvq->virtio_net_hdr_mz->addr;
- return result;
-}
-
-static struct virtio_pmd_ctrl *
-virtio_send_command_split(struct virtnet_ctl *cvq,
- struct virtio_pmd_ctrl *ctrl,
- int *dlen, int pkt_num)
-{
- struct virtio_pmd_ctrl *result;
- struct virtqueue *vq = virtnet_cq_to_vq(cvq);
- uint32_t head, i;
- int k, sum = 0;
-
- head = vq->vq_desc_head_idx;
-
- /*
- * Format is enforced in qemu code:
- * One TX packet for header;
- * At least one TX packet per argument;
- * One RX packet for ACK.
- */
- vq->vq_split.ring.desc[head].flags = VRING_DESC_F_NEXT;
- vq->vq_split.ring.desc[head].addr = cvq->virtio_net_hdr_mem;
- vq->vq_split.ring.desc[head].len = sizeof(struct virtio_net_ctrl_hdr);
- vq->vq_free_cnt--;
- i = vq->vq_split.ring.desc[head].next;
-
- for (k = 0; k < pkt_num; k++) {
- vq->vq_split.ring.desc[i].flags = VRING_DESC_F_NEXT;
- vq->vq_split.ring.desc[i].addr = cvq->virtio_net_hdr_mem
- + sizeof(struct virtio_net_ctrl_hdr)
- + sizeof(ctrl->status) + sizeof(uint8_t)*sum;
- vq->vq_split.ring.desc[i].len = dlen[k];
- sum += dlen[k];
- vq->vq_free_cnt--;
- i = vq->vq_split.ring.desc[i].next;
- }
-
- vq->vq_split.ring.desc[i].flags = VRING_DESC_F_WRITE;
- vq->vq_split.ring.desc[i].addr = cvq->virtio_net_hdr_mem
- + sizeof(struct virtio_net_ctrl_hdr);
- vq->vq_split.ring.desc[i].len = sizeof(ctrl->status);
- vq->vq_free_cnt--;
-
- vq->vq_desc_head_idx = vq->vq_split.ring.desc[i].next;
-
- vq_update_avail_ring(vq, head);
- vq_update_avail_idx(vq);
-
- PMD_INIT_LOG(DEBUG, "vq->vq_queue_index = %d", vq->vq_queue_index);
-
- virtqueue_notify(vq);
-
- while (virtqueue_nused(vq) == 0)
- usleep(100);
-
- while (virtqueue_nused(vq)) {
- uint32_t idx, desc_idx, used_idx;
- struct vring_used_elem *uep;
-
- used_idx = (uint32_t)(vq->vq_used_cons_idx
- & (vq->vq_nentries - 1));
- uep = &vq->vq_split.ring.used->ring[used_idx];
- idx = (uint32_t) uep->id;
- desc_idx = idx;
-
- while (vq->vq_split.ring.desc[desc_idx].flags &
- VRING_DESC_F_NEXT) {
- desc_idx = vq->vq_split.ring.desc[desc_idx].next;
- vq->vq_free_cnt++;
- }
-
- vq->vq_split.ring.desc[desc_idx].next = vq->vq_desc_head_idx;
- vq->vq_desc_head_idx = idx;
-
- vq->vq_used_cons_idx++;
- vq->vq_free_cnt++;
- }
-
- PMD_INIT_LOG(DEBUG, "vq->vq_free_cnt=%d\nvq->vq_desc_head_idx=%d",
- vq->vq_free_cnt, vq->vq_desc_head_idx);
-
- result = cvq->virtio_net_hdr_mz->addr;
- return result;
-}
-
-static int
-virtio_send_command(struct virtnet_ctl *cvq, struct virtio_pmd_ctrl *ctrl,
- int *dlen, int pkt_num)
-{
- virtio_net_ctrl_ack status = ~0;
- struct virtio_pmd_ctrl *result;
- struct virtqueue *vq;
-
- ctrl->status = status;
-
- if (!cvq) {
- PMD_INIT_LOG(ERR, "Control queue is not supported.");
- return -1;
- }
-
- rte_spinlock_lock(&cvq->lock);
- vq = virtnet_cq_to_vq(cvq);
-
- PMD_INIT_LOG(DEBUG, "vq->vq_desc_head_idx = %d, status = %d, "
- "vq->hw->cvq = %p vq = %p",
- vq->vq_desc_head_idx, status, vq->hw->cvq, vq);
-
- if (vq->vq_free_cnt < pkt_num + 2 || pkt_num < 1) {
- rte_spinlock_unlock(&cvq->lock);
- return -1;
- }
-
- memcpy(cvq->virtio_net_hdr_mz->addr, ctrl,
- sizeof(struct virtio_pmd_ctrl));
-
- if (virtio_with_packed_queue(vq->hw))
- result = virtio_send_command_packed(cvq, ctrl, dlen, pkt_num);
- else
- result = virtio_send_command_split(cvq, ctrl, dlen, pkt_num);
-
- rte_spinlock_unlock(&cvq->lock);
- return result->status;
-}
-
static int
virtio_set_multiple_queues_rss(struct rte_eth_dev *dev, uint16_t nb_queues)
{
diff --git a/drivers/net/virtio/virtio_rxtx.h b/drivers/net/virtio/virtio_rxtx.h
index 6ce5d67d15..6ee3a13100 100644
--- a/drivers/net/virtio/virtio_rxtx.h
+++ b/drivers/net/virtio/virtio_rxtx.h
@@ -46,15 +46,6 @@ struct virtnet_tx {
const struct rte_memzone *mz; /**< mem zone to populate TX ring. */
};
-struct virtnet_ctl {
- /**< memzone to populate hdr. */
- const struct rte_memzone *virtio_net_hdr_mz;
- rte_iova_t virtio_net_hdr_mem; /**< hdr for each xmit packet */
- uint16_t port_id; /**< Device port identifier. */
- const struct rte_memzone *mz; /**< mem zone to populate CTL ring. */
- rte_spinlock_t lock; /**< spinlock for control queue. */
-};
-
int virtio_rxq_vec_setup(struct virtnet_rx *rxvq);
void virtio_update_packet_stats(struct virtnet_stats *stats,
struct rte_mbuf *mbuf);
diff --git a/drivers/net/virtio/virtqueue.h b/drivers/net/virtio/virtqueue.h
index f5d8b40cad..62f472850e 100644
--- a/drivers/net/virtio/virtqueue.h
+++ b/drivers/net/virtio/virtqueue.h
@@ -16,6 +16,7 @@
#include "virtio_ring.h"
#include "virtio_logs.h"
#include "virtio_rxtx.h"
+#include "virtio_cvq.h"
struct rte_mbuf;
@@ -145,113 +146,9 @@ enum { VTNET_RQ = 0, VTNET_TQ = 1, VTNET_CQ = 2 };
*/
#define VQ_RING_DESC_CHAIN_END 32768
-/**
- * Control the RX mode, ie. promiscuous, allmulti, etc...
- * All commands require an "out" sg entry containing a 1 byte
- * state value, zero = disable, non-zero = enable. Commands
- * 0 and 1 are supported with the VIRTIO_NET_F_CTRL_RX feature.
- * Commands 2-5 are added with VIRTIO_NET_F_CTRL_RX_EXTRA.
- */
-#define VIRTIO_NET_CTRL_RX 0
-#define VIRTIO_NET_CTRL_RX_PROMISC 0
-#define VIRTIO_NET_CTRL_RX_ALLMULTI 1
-#define VIRTIO_NET_CTRL_RX_ALLUNI 2
-#define VIRTIO_NET_CTRL_RX_NOMULTI 3
-#define VIRTIO_NET_CTRL_RX_NOUNI 4
-#define VIRTIO_NET_CTRL_RX_NOBCAST 5
-
-/**
- * Control the MAC
- *
- * The MAC filter table is managed by the hypervisor, the guest should
- * assume the size is infinite. Filtering should be considered
- * non-perfect, ie. based on hypervisor resources, the guest may
- * received packets from sources not specified in the filter list.
- *
- * In addition to the class/cmd header, the TABLE_SET command requires
- * two out scatterlists. Each contains a 4 byte count of entries followed
- * by a concatenated byte stream of the ETH_ALEN MAC addresses. The
- * first sg list contains unicast addresses, the second is for multicast.
- * This functionality is present if the VIRTIO_NET_F_CTRL_RX feature
- * is available.
- *
- * The ADDR_SET command requests one out scatterlist, it contains a
- * 6 bytes MAC address. This functionality is present if the
- * VIRTIO_NET_F_CTRL_MAC_ADDR feature is available.
- */
-struct virtio_net_ctrl_mac {
- uint32_t entries;
- uint8_t macs[][RTE_ETHER_ADDR_LEN];
-} __rte_packed;
-
-#define VIRTIO_NET_CTRL_MAC 1
-#define VIRTIO_NET_CTRL_MAC_TABLE_SET 0
-#define VIRTIO_NET_CTRL_MAC_ADDR_SET 1
-
-/**
- * Control VLAN filtering
- *
- * The VLAN filter table is controlled via a simple ADD/DEL interface.
- * VLAN IDs not added may be filtered by the hypervisor. Del is the
- * opposite of add. Both commands expect an out entry containing a 2
- * byte VLAN ID. VLAN filtering is available with the
- * VIRTIO_NET_F_CTRL_VLAN feature bit.
- */
-#define VIRTIO_NET_CTRL_VLAN 2
-#define VIRTIO_NET_CTRL_VLAN_ADD 0
-#define VIRTIO_NET_CTRL_VLAN_DEL 1
-
-/**
- * RSS control
- *
- * The RSS feature configuration message is sent by the driver when
- * VIRTIO_NET_F_RSS has been negotiated. It provides the device with
- * hash types to use, hash key and indirection table. In this
- * implementation, the driver only supports fixed key length (40B)
- * and indirection table size (128 entries).
- */
-#define VIRTIO_NET_RSS_RETA_SIZE 128
-#define VIRTIO_NET_RSS_KEY_SIZE 40
-
-struct virtio_net_ctrl_rss {
- uint32_t hash_types;
- uint16_t indirection_table_mask;
- uint16_t unclassified_queue;
- uint16_t indirection_table[VIRTIO_NET_RSS_RETA_SIZE];
- uint16_t max_tx_vq;
- uint8_t hash_key_length;
- uint8_t hash_key_data[VIRTIO_NET_RSS_KEY_SIZE];
-};
-
-/*
- * Control link announce acknowledgement
- *
- * The command VIRTIO_NET_CTRL_ANNOUNCE_ACK is used to indicate that
- * driver has received the notification; device would clear the
- * VIRTIO_NET_S_ANNOUNCE bit in the status field after it receives
- * this command.
- */
-#define VIRTIO_NET_CTRL_ANNOUNCE 3
-#define VIRTIO_NET_CTRL_ANNOUNCE_ACK 0
-
-struct virtio_net_ctrl_hdr {
- uint8_t class;
- uint8_t cmd;
-} __rte_packed;
-
-typedef uint8_t virtio_net_ctrl_ack;
-
#define VIRTIO_NET_OK 0
#define VIRTIO_NET_ERR 1
-#define VIRTIO_MAX_CTRL_DATA 2048
-
-struct virtio_pmd_ctrl {
- struct virtio_net_ctrl_hdr hdr;
- virtio_net_ctrl_ack status;
- uint8_t data[VIRTIO_MAX_CTRL_DATA];
-};
-
struct vq_desc_extra {
void *cookie;
uint16_t ndescs;
--
2.38.1
* [PATCH v1 02/21] net/virtio: introduce notify callback for control queue
2022-11-30 15:56 [PATCH v1 00/21] Add control queue & MQ support to Virtio-user vDPA Maxime Coquelin
2022-11-30 15:56 ` [PATCH v1 01/21] net/virtio: move CVQ code into a dedicated file Maxime Coquelin
@ 2022-11-30 15:56 ` Maxime Coquelin
2023-01-30 7:51 ` Xia, Chenbo
2022-11-30 15:56 ` [PATCH v1 03/21] net/virtio: virtqueue headers alloc refactoring Maxime Coquelin
` (19 subsequent siblings)
21 siblings, 1 reply; 48+ messages in thread
From: Maxime Coquelin @ 2022-11-30 15:56 UTC (permalink / raw)
To: dev, chenbo.xia, david.marchand, eperezma; +Cc: Maxime Coquelin
This patch introduces a notification callback for the control
virtqueue as preliminary work to add shadow control virtqueue
support.
This new callback is required so that the shadow control queue
implemented in Virtio-user does not call the notification op
implemented at the driver layer.
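The indirection introduced by this patch can be sketched as below. `cvq_ops` and `cvq_kick` are illustrative names, not the driver's; in the actual patch the hook lives in `struct virtnet_ctl` as `notify_queue`/`notify_cookie` and the CVQ code calls it in place of `virtqueue_notify()`.

```c
#include <stddef.h>

struct virtqueue; /* opaque here; defined elsewhere in the driver */

/* The control queue gains a notify hook plus an opaque cookie, so the
 * common CVQ code no longer calls the driver-layer notify directly and
 * Virtio-user can plug in a different kick for its shadow queue. */
struct cvq_ops {
	void (*notify_queue)(struct virtqueue *vq, void *cookie);
	void *notify_cookie;
};

static void cvq_kick(struct cvq_ops *cvq, struct virtqueue *vq)
{
	cvq->notify_queue(vq, cvq->notify_cookie);
}

/* Example backend for illustration: count kicks via the cookie. */
static void counting_notify(struct virtqueue *vq, void *cookie)
{
	(void)vq;
	(*(int *)cookie)++;
}
```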
Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
---
drivers/net/virtio/virtio_cvq.c | 4 ++--
drivers/net/virtio/virtio_cvq.h | 4 ++++
drivers/net/virtio/virtio_ethdev.c | 7 +++++++
3 files changed, 13 insertions(+), 2 deletions(-)
diff --git a/drivers/net/virtio/virtio_cvq.c b/drivers/net/virtio/virtio_cvq.c
index de4299a2a7..cd25614df8 100644
--- a/drivers/net/virtio/virtio_cvq.c
+++ b/drivers/net/virtio/virtio_cvq.c
@@ -76,7 +76,7 @@ virtio_send_command_packed(struct virtnet_ctl *cvq,
vq->hw->weak_barriers);
virtio_wmb(vq->hw->weak_barriers);
- virtqueue_notify(vq);
+ cvq->notify_queue(vq, cvq->notify_cookie);
/* wait for used desc in virtqueue
* desc_is_used has a load-acquire or rte_io_rmb inside
@@ -155,7 +155,7 @@ virtio_send_command_split(struct virtnet_ctl *cvq,
PMD_INIT_LOG(DEBUG, "vq->vq_queue_index = %d", vq->vq_queue_index);
- virtqueue_notify(vq);
+ cvq->notify_queue(vq, cvq->notify_cookie);
while (virtqueue_nused(vq) == 0)
usleep(100);
diff --git a/drivers/net/virtio/virtio_cvq.h b/drivers/net/virtio/virtio_cvq.h
index 139e813ffb..224dc81422 100644
--- a/drivers/net/virtio/virtio_cvq.h
+++ b/drivers/net/virtio/virtio_cvq.h
@@ -7,6 +7,8 @@
#include <rte_ether.h>
+struct virtqueue;
+
/**
* Control the RX mode, ie. promiscuous, allmulti, etc...
* All commands require an "out" sg entry containing a 1 byte
@@ -110,6 +112,8 @@ struct virtnet_ctl {
uint16_t port_id; /**< Device port identifier. */
const struct rte_memzone *mz; /**< mem zone to populate CTL ring. */
rte_spinlock_t lock; /**< spinlock for control queue. */
+ void (*notify_queue)(struct virtqueue *vq, void *cookie); /**< notify ops. */
+ void *notify_cookie; /**< cookie for notify ops */
};
#define VIRTIO_MAX_CTRL_DATA 2048
diff --git a/drivers/net/virtio/virtio_ethdev.c b/drivers/net/virtio/virtio_ethdev.c
index d553f89a0d..8db8771f4d 100644
--- a/drivers/net/virtio/virtio_ethdev.c
+++ b/drivers/net/virtio/virtio_ethdev.c
@@ -253,6 +253,12 @@ virtio_init_vring(struct virtqueue *vq)
virtqueue_disable_intr(vq);
}
+static void
+virtio_control_queue_notify(struct virtqueue *vq, __rte_unused void *cookie)
+{
+ virtqueue_notify(vq);
+}
+
static int
virtio_init_queue(struct rte_eth_dev *dev, uint16_t queue_idx)
{
@@ -421,6 +427,7 @@ virtio_init_queue(struct rte_eth_dev *dev, uint16_t queue_idx)
memset(cvq->virtio_net_hdr_mz->addr, 0, rte_mem_page_size());
hw->cvq = cvq;
+ vq->cq.notify_queue = &virtio_control_queue_notify;
}
if (hw->use_va)
--
2.38.1
* [PATCH v1 03/21] net/virtio: virtqueue headers alloc refactoring
2022-11-30 15:56 [PATCH v1 00/21] Add control queue & MQ support to Virtio-user vDPA Maxime Coquelin
2022-11-30 15:56 ` [PATCH v1 01/21] net/virtio: move CVQ code into a dedicated file Maxime Coquelin
2022-11-30 15:56 ` [PATCH v1 02/21] net/virtio: introduce notify callback for control queue Maxime Coquelin
@ 2022-11-30 15:56 ` Maxime Coquelin
2023-01-30 7:51 ` Xia, Chenbo
2022-11-30 15:56 ` [PATCH v1 04/21] net/virtio: remove port ID info from Rx queue Maxime Coquelin
` (18 subsequent siblings)
21 siblings, 1 reply; 48+ messages in thread
From: Maxime Coquelin @ 2022-11-30 15:56 UTC (permalink / raw)
To: dev, chenbo.xia, david.marchand, eperezma; +Cc: Maxime Coquelin
This patch refactors virtqueue initialization by moving the
allocation and deallocation of its headers into dedicated
functions.
While at it, it renames the memzone metadata and address
pointers in the virtnet_tx and virtnet_ctl structures to
remove the redundant virtio_net_ prefix.
Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
---
drivers/net/virtio/virtio_cvq.c | 19 ++--
drivers/net/virtio/virtio_cvq.h | 9 +-
drivers/net/virtio/virtio_ethdev.c | 149 ++++++++++++++++++-----------
drivers/net/virtio/virtio_rxtx.c | 12 +--
drivers/net/virtio/virtio_rxtx.h | 12 +--
drivers/net/virtio/virtqueue.c | 8 +-
drivers/net/virtio/virtqueue.h | 13 +--
7 files changed, 126 insertions(+), 96 deletions(-)
diff --git a/drivers/net/virtio/virtio_cvq.c b/drivers/net/virtio/virtio_cvq.c
index cd25614df8..5e457f5fd0 100644
--- a/drivers/net/virtio/virtio_cvq.c
+++ b/drivers/net/virtio/virtio_cvq.c
@@ -34,7 +34,7 @@ virtio_send_command_packed(struct virtnet_ctl *cvq,
*/
head = vq->vq_avail_idx;
flags = vq->vq_packed.cached_flags;
- desc[head].addr = cvq->virtio_net_hdr_mem;
+ desc[head].addr = cvq->hdr_mem;
desc[head].len = sizeof(struct virtio_net_ctrl_hdr);
vq->vq_free_cnt--;
nb_descs++;
@@ -44,7 +44,7 @@ virtio_send_command_packed(struct virtnet_ctl *cvq,
}
for (k = 0; k < pkt_num; k++) {
- desc[vq->vq_avail_idx].addr = cvq->virtio_net_hdr_mem
+ desc[vq->vq_avail_idx].addr = cvq->hdr_mem
+ sizeof(struct virtio_net_ctrl_hdr)
+ sizeof(ctrl->status) + sizeof(uint8_t) * sum;
desc[vq->vq_avail_idx].len = dlen[k];
@@ -60,7 +60,7 @@ virtio_send_command_packed(struct virtnet_ctl *cvq,
}
}
- desc[vq->vq_avail_idx].addr = cvq->virtio_net_hdr_mem
+ desc[vq->vq_avail_idx].addr = cvq->hdr_mem
+ sizeof(struct virtio_net_ctrl_hdr);
desc[vq->vq_avail_idx].len = sizeof(ctrl->status);
desc[vq->vq_avail_idx].flags = VRING_DESC_F_WRITE |
@@ -103,7 +103,7 @@ virtio_send_command_packed(struct virtnet_ctl *cvq,
vq->vq_packed.cached_flags,
vq->vq_packed.used_wrap_counter);
- result = cvq->virtio_net_hdr_mz->addr;
+ result = cvq->hdr_mz->addr;
return result;
}
@@ -126,14 +126,14 @@ virtio_send_command_split(struct virtnet_ctl *cvq,
* One RX packet for ACK.
*/
vq->vq_split.ring.desc[head].flags = VRING_DESC_F_NEXT;
- vq->vq_split.ring.desc[head].addr = cvq->virtio_net_hdr_mem;
+ vq->vq_split.ring.desc[head].addr = cvq->hdr_mem;
vq->vq_split.ring.desc[head].len = sizeof(struct virtio_net_ctrl_hdr);
vq->vq_free_cnt--;
i = vq->vq_split.ring.desc[head].next;
for (k = 0; k < pkt_num; k++) {
vq->vq_split.ring.desc[i].flags = VRING_DESC_F_NEXT;
- vq->vq_split.ring.desc[i].addr = cvq->virtio_net_hdr_mem
+ vq->vq_split.ring.desc[i].addr = cvq->hdr_mem
+ sizeof(struct virtio_net_ctrl_hdr)
+ sizeof(ctrl->status) + sizeof(uint8_t) * sum;
vq->vq_split.ring.desc[i].len = dlen[k];
@@ -143,7 +143,7 @@ virtio_send_command_split(struct virtnet_ctl *cvq,
}
vq->vq_split.ring.desc[i].flags = VRING_DESC_F_WRITE;
- vq->vq_split.ring.desc[i].addr = cvq->virtio_net_hdr_mem
+ vq->vq_split.ring.desc[i].addr = cvq->hdr_mem
+ sizeof(struct virtio_net_ctrl_hdr);
vq->vq_split.ring.desc[i].len = sizeof(ctrl->status);
vq->vq_free_cnt--;
@@ -186,7 +186,7 @@ virtio_send_command_split(struct virtnet_ctl *cvq,
PMD_INIT_LOG(DEBUG, "vq->vq_free_cnt=%d\nvq->vq_desc_head_idx=%d",
vq->vq_free_cnt, vq->vq_desc_head_idx);
- result = cvq->virtio_net_hdr_mz->addr;
+ result = cvq->hdr_mz->addr;
return result;
}
@@ -216,8 +216,7 @@ virtio_send_command(struct virtnet_ctl *cvq, struct virtio_pmd_ctrl *ctrl, int *
return -1;
}
- memcpy(cvq->virtio_net_hdr_mz->addr, ctrl,
- sizeof(struct virtio_pmd_ctrl));
+ memcpy(cvq->hdr_mz->addr, ctrl, sizeof(struct virtio_pmd_ctrl));
if (virtio_with_packed_queue(vq->hw))
result = virtio_send_command_packed(cvq, ctrl, dlen, pkt_num);
diff --git a/drivers/net/virtio/virtio_cvq.h b/drivers/net/virtio/virtio_cvq.h
index 224dc81422..226561e6b8 100644
--- a/drivers/net/virtio/virtio_cvq.h
+++ b/drivers/net/virtio/virtio_cvq.h
@@ -106,11 +106,10 @@ struct virtio_net_ctrl_hdr {
typedef uint8_t virtio_net_ctrl_ack;
struct virtnet_ctl {
- /**< memzone to populate hdr. */
- const struct rte_memzone *virtio_net_hdr_mz;
- rte_iova_t virtio_net_hdr_mem; /**< hdr for each xmit packet */
- uint16_t port_id; /**< Device port identifier. */
- const struct rte_memzone *mz; /**< mem zone to populate CTL ring. */
+ const struct rte_memzone *hdr_mz; /**< memzone to populate hdr. */
+ rte_iova_t hdr_mem; /**< hdr for each xmit packet */
+ uint16_t port_id; /**< Device port identifier. */
+ const struct rte_memzone *mz; /**< mem zone to populate CTL ring. */
rte_spinlock_t lock; /**< spinlock for control queue. */
void (*notify_queue)(struct virtqueue *vq, void *cookie); /**< notify ops. */
void *notify_cookie; /**< cookie for notify ops */
diff --git a/drivers/net/virtio/virtio_ethdev.c b/drivers/net/virtio/virtio_ethdev.c
index 8db8771f4d..cead5f0884 100644
--- a/drivers/net/virtio/virtio_ethdev.c
+++ b/drivers/net/virtio/virtio_ethdev.c
@@ -259,19 +259,97 @@ virtio_control_queue_notify(struct virtqueue *vq, __rte_unused void *cookie)
virtqueue_notify(vq);
}
+static int
+virtio_alloc_queue_headers(struct virtqueue *vq, int numa_node, const char *name)
+{
+ char hdr_name[VIRTQUEUE_MAX_NAME_SZ];
+ const struct rte_memzone **hdr_mz;
+ rte_iova_t *hdr_mem;
+ ssize_t size;
+ int queue_type;
+
+ queue_type = virtio_get_queue_type(vq->hw, vq->vq_queue_index);
+ switch (queue_type) {
+ case VTNET_TQ:
+ /*
+ * For each xmit packet, allocate a virtio_net_hdr
+ * and indirect ring elements
+ */
+ size = vq->vq_nentries * sizeof(struct virtio_tx_region);
+ hdr_mz = &vq->txq.hdr_mz;
+ hdr_mem = &vq->txq.hdr_mem;
+ break;
+ case VTNET_CQ:
+ /* Allocate a page for control vq command, data and status */
+ size = rte_mem_page_size();
+ hdr_mz = &vq->cq.hdr_mz;
+ hdr_mem = &vq->cq.hdr_mem;
+ break;
+ case VTNET_RQ:
+ /* fallthrough */
+ default:
+ return 0;
+ }
+
+ snprintf(hdr_name, sizeof(hdr_name), "%s_hdr", name);
+ *hdr_mz = rte_memzone_reserve_aligned(hdr_name, size, numa_node,
+ RTE_MEMZONE_IOVA_CONTIG, RTE_CACHE_LINE_SIZE);
+ if (*hdr_mz == NULL) {
+ if (rte_errno == EEXIST)
+ *hdr_mz = rte_memzone_lookup(hdr_name);
+ if (*hdr_mz == NULL)
+ return -ENOMEM;
+ }
+
+ memset((*hdr_mz)->addr, 0, size);
+
+ if (vq->hw->use_va)
+ *hdr_mem = (uintptr_t)(*hdr_mz)->addr;
+ else
+ *hdr_mem = (uintptr_t)(*hdr_mz)->iova;
+
+ return 0;
+}
+
+static void
+virtio_free_queue_headers(struct virtqueue *vq)
+{
+ const struct rte_memzone **hdr_mz;
+ rte_iova_t *hdr_mem;
+ int queue_type;
+
+ queue_type = virtio_get_queue_type(vq->hw, vq->vq_queue_index);
+ switch (queue_type) {
+ case VTNET_TQ:
+ hdr_mz = &vq->txq.hdr_mz;
+ hdr_mem = &vq->txq.hdr_mem;
+ break;
+ case VTNET_CQ:
+ hdr_mz = &vq->cq.hdr_mz;
+ hdr_mem = &vq->cq.hdr_mem;
+ break;
+ case VTNET_RQ:
+ /* fallthrough */
+ default:
+ return;
+ }
+
+ rte_memzone_free(*hdr_mz);
+ *hdr_mz = NULL;
+ *hdr_mem = 0;
+}
+
static int
virtio_init_queue(struct rte_eth_dev *dev, uint16_t queue_idx)
{
char vq_name[VIRTQUEUE_MAX_NAME_SZ];
- char vq_hdr_name[VIRTQUEUE_MAX_NAME_SZ];
- const struct rte_memzone *mz = NULL, *hdr_mz = NULL;
+ const struct rte_memzone *mz = NULL;
unsigned int vq_size, size;
struct virtio_hw *hw = dev->data->dev_private;
struct virtnet_rx *rxvq = NULL;
struct virtnet_tx *txvq = NULL;
struct virtnet_ctl *cvq = NULL;
struct virtqueue *vq;
- size_t sz_hdr_mz = 0;
void *sw_ring = NULL;
int queue_type = virtio_get_queue_type(hw, queue_idx);
int ret;
@@ -297,22 +375,12 @@ virtio_init_queue(struct rte_eth_dev *dev, uint16_t queue_idx)
return -EINVAL;
}
- snprintf(vq_name, sizeof(vq_name), "port%d_vq%d",
- dev->data->port_id, queue_idx);
+ snprintf(vq_name, sizeof(vq_name), "port%d_vq%d", dev->data->port_id, queue_idx);
size = RTE_ALIGN_CEIL(sizeof(*vq) +
vq_size * sizeof(struct vq_desc_extra),
RTE_CACHE_LINE_SIZE);
- if (queue_type == VTNET_TQ) {
- /*
- * For each xmit packet, allocate a virtio_net_hdr
- * and indirect ring elements
- */
- sz_hdr_mz = vq_size * sizeof(struct virtio_tx_region);
- } else if (queue_type == VTNET_CQ) {
- /* Allocate a page for control vq command, data and status */
- sz_hdr_mz = rte_mem_page_size();
- }
+
vq = rte_zmalloc_socket(vq_name, size, RTE_CACHE_LINE_SIZE,
numa_node);
@@ -366,20 +434,10 @@ virtio_init_queue(struct rte_eth_dev *dev, uint16_t queue_idx)
virtio_init_vring(vq);
- if (sz_hdr_mz) {
- snprintf(vq_hdr_name, sizeof(vq_hdr_name), "port%d_vq%d_hdr",
- dev->data->port_id, queue_idx);
- hdr_mz = rte_memzone_reserve_aligned(vq_hdr_name, sz_hdr_mz,
- numa_node, RTE_MEMZONE_IOVA_CONTIG,
- RTE_CACHE_LINE_SIZE);
- if (hdr_mz == NULL) {
- if (rte_errno == EEXIST)
- hdr_mz = rte_memzone_lookup(vq_hdr_name);
- if (hdr_mz == NULL) {
- ret = -ENOMEM;
- goto free_mz;
- }
- }
+ ret = virtio_alloc_queue_headers(vq, numa_node, vq_name);
+ if (ret) {
+ PMD_INIT_LOG(ERR, "Failed to alloc queue headers");
+ goto free_mz;
}
if (queue_type == VTNET_RQ) {
@@ -411,21 +469,9 @@ virtio_init_queue(struct rte_eth_dev *dev, uint16_t queue_idx)
txvq = &vq->txq;
txvq->port_id = dev->data->port_id;
txvq->mz = mz;
- txvq->virtio_net_hdr_mz = hdr_mz;
- if (hw->use_va)
- txvq->virtio_net_hdr_mem = (uintptr_t)hdr_mz->addr;
- else
- txvq->virtio_net_hdr_mem = hdr_mz->iova;
} else if (queue_type == VTNET_CQ) {
cvq = &vq->cq;
cvq->mz = mz;
- cvq->virtio_net_hdr_mz = hdr_mz;
- if (hw->use_va)
- cvq->virtio_net_hdr_mem = (uintptr_t)hdr_mz->addr;
- else
- cvq->virtio_net_hdr_mem = hdr_mz->iova;
- memset(cvq->virtio_net_hdr_mz->addr, 0, rte_mem_page_size());
-
hw->cvq = cvq;
vq->cq.notify_queue = &virtio_control_queue_notify;
}
@@ -439,18 +485,15 @@ virtio_init_queue(struct rte_eth_dev *dev, uint16_t queue_idx)
struct virtio_tx_region *txr;
unsigned int i;
- txr = hdr_mz->addr;
- memset(txr, 0, vq_size * sizeof(*txr));
+ txr = txvq->hdr_mz->addr;
for (i = 0; i < vq_size; i++) {
/* first indirect descriptor is always the tx header */
if (!virtio_with_packed_queue(hw)) {
struct vring_desc *start_dp = txr[i].tx_indir;
vring_desc_init_split(start_dp,
RTE_DIM(txr[i].tx_indir));
- start_dp->addr = txvq->virtio_net_hdr_mem
- + i * sizeof(*txr)
- + offsetof(struct virtio_tx_region,
- tx_hdr);
+ start_dp->addr = txvq->hdr_mem + i * sizeof(*txr)
+ + offsetof(struct virtio_tx_region, tx_hdr);
start_dp->len = hw->vtnet_hdr_size;
start_dp->flags = VRING_DESC_F_NEXT;
} else {
@@ -458,10 +501,8 @@ virtio_init_queue(struct rte_eth_dev *dev, uint16_t queue_idx)
txr[i].tx_packed_indir;
vring_desc_init_indirect_packed(start_dp,
RTE_DIM(txr[i].tx_packed_indir));
- start_dp->addr = txvq->virtio_net_hdr_mem
- + i * sizeof(*txr)
- + offsetof(struct virtio_tx_region,
- tx_hdr);
+ start_dp->addr = txvq->hdr_mem + i * sizeof(*txr)
+ + offsetof(struct virtio_tx_region, tx_hdr);
start_dp->len = hw->vtnet_hdr_size;
}
}
@@ -481,7 +522,7 @@ virtio_init_queue(struct rte_eth_dev *dev, uint16_t queue_idx)
free_sw_ring:
rte_free(sw_ring);
free_hdr_mz:
- rte_memzone_free(hdr_mz);
+ virtio_free_queue_headers(vq);
free_mz:
rte_memzone_free(mz);
free_vq:
@@ -514,12 +555,12 @@ virtio_free_queues(struct virtio_hw *hw)
rte_memzone_free(vq->rxq.mz);
} else if (queue_type == VTNET_TQ) {
rte_memzone_free(vq->txq.mz);
- rte_memzone_free(vq->txq.virtio_net_hdr_mz);
} else {
rte_memzone_free(vq->cq.mz);
- rte_memzone_free(vq->cq.virtio_net_hdr_mz);
}
+ virtio_free_queue_headers(vq);
+
rte_free(vq);
hw->vqs[i] = NULL;
}
diff --git a/drivers/net/virtio/virtio_rxtx.c b/drivers/net/virtio/virtio_rxtx.c
index d9d40832e0..bd95e8ceb5 100644
--- a/drivers/net/virtio/virtio_rxtx.c
+++ b/drivers/net/virtio/virtio_rxtx.c
@@ -542,7 +542,7 @@ virtqueue_enqueue_xmit(struct virtnet_tx *txvq, struct rte_mbuf *cookie,
uint16_t needed, int use_indirect, int can_push,
int in_order)
{
- struct virtio_tx_region *txr = txvq->virtio_net_hdr_mz->addr;
+ struct virtio_tx_region *txr = txvq->hdr_mz->addr;
struct vq_desc_extra *dxp;
struct virtqueue *vq = virtnet_txq_to_vq(txvq);
struct vring_desc *start_dp;
@@ -579,9 +579,8 @@ virtqueue_enqueue_xmit(struct virtnet_tx *txvq, struct rte_mbuf *cookie,
* the first slot in indirect ring is already preset
* to point to the header in reserved region
*/
- start_dp[idx].addr = txvq->virtio_net_hdr_mem +
- RTE_PTR_DIFF(&txr[idx].tx_indir, txr);
- start_dp[idx].len = (seg_num + 1) * sizeof(struct vring_desc);
+ start_dp[idx].addr = txvq->hdr_mem + RTE_PTR_DIFF(&txr[idx].tx_indir, txr);
+ start_dp[idx].len = (seg_num + 1) * sizeof(struct vring_desc);
start_dp[idx].flags = VRING_DESC_F_INDIRECT;
hdr = (struct virtio_net_hdr *)&txr[idx].tx_hdr;
@@ -592,9 +591,8 @@ virtqueue_enqueue_xmit(struct virtnet_tx *txvq, struct rte_mbuf *cookie,
/* setup first tx ring slot to point to header
* stored in reserved region.
*/
- start_dp[idx].addr = txvq->virtio_net_hdr_mem +
- RTE_PTR_DIFF(&txr[idx].tx_hdr, txr);
- start_dp[idx].len = vq->hw->vtnet_hdr_size;
+ start_dp[idx].addr = txvq->hdr_mem + RTE_PTR_DIFF(&txr[idx].tx_hdr, txr);
+ start_dp[idx].len = vq->hw->vtnet_hdr_size;
start_dp[idx].flags = VRING_DESC_F_NEXT;
hdr = (struct virtio_net_hdr *)&txr[idx].tx_hdr;
diff --git a/drivers/net/virtio/virtio_rxtx.h b/drivers/net/virtio/virtio_rxtx.h
index 6ee3a13100..226c722d64 100644
--- a/drivers/net/virtio/virtio_rxtx.h
+++ b/drivers/net/virtio/virtio_rxtx.h
@@ -33,15 +33,13 @@ struct virtnet_rx {
};
struct virtnet_tx {
- /**< memzone to populate hdr. */
- const struct rte_memzone *virtio_net_hdr_mz;
- rte_iova_t virtio_net_hdr_mem; /**< hdr for each xmit packet */
+ const struct rte_memzone *hdr_mz; /**< memzone to populate hdr. */
+ rte_iova_t hdr_mem; /**< hdr for each xmit packet */
- uint16_t queue_id; /**< DPDK queue index. */
- uint16_t port_id; /**< Device port identifier. */
+ uint16_t queue_id; /**< DPDK queue index. */
+ uint16_t port_id; /**< Device port identifier. */
- /* Statistics */
- struct virtnet_stats stats;
+ struct virtnet_stats stats; /* Statistics */
const struct rte_memzone *mz; /**< mem zone to populate TX ring. */
};
diff --git a/drivers/net/virtio/virtqueue.c b/drivers/net/virtio/virtqueue.c
index c98d696e62..3b174a5923 100644
--- a/drivers/net/virtio/virtqueue.c
+++ b/drivers/net/virtio/virtqueue.c
@@ -200,10 +200,9 @@ virtqueue_txvq_reset_packed(struct virtqueue *vq)
vq->vq_packed.event_flags_shadow = 0;
txvq = &vq->txq;
- txr = txvq->virtio_net_hdr_mz->addr;
+ txr = txvq->hdr_mz->addr;
memset(txvq->mz->addr, 0, txvq->mz->len);
- memset(txvq->virtio_net_hdr_mz->addr, 0,
- txvq->virtio_net_hdr_mz->len);
+ memset(txvq->hdr_mz->addr, 0, txvq->hdr_mz->len);
for (desc_idx = 0; desc_idx < vq->vq_nentries; desc_idx++) {
dxp = &vq->vq_descx[desc_idx];
@@ -217,8 +216,7 @@ virtqueue_txvq_reset_packed(struct virtqueue *vq)
start_dp = txr[desc_idx].tx_packed_indir;
vring_desc_init_indirect_packed(start_dp,
RTE_DIM(txr[desc_idx].tx_packed_indir));
- start_dp->addr = txvq->virtio_net_hdr_mem
- + desc_idx * sizeof(*txr)
+ start_dp->addr = txvq->hdr_mem + desc_idx * sizeof(*txr)
+ offsetof(struct virtio_tx_region, tx_hdr);
start_dp->len = vq->hw->vtnet_hdr_size;
}
diff --git a/drivers/net/virtio/virtqueue.h b/drivers/net/virtio/virtqueue.h
index 62f472850e..f5058f362c 100644
--- a/drivers/net/virtio/virtqueue.h
+++ b/drivers/net/virtio/virtqueue.h
@@ -604,7 +604,7 @@ virtqueue_enqueue_xmit_packed(struct virtnet_tx *txvq, struct rte_mbuf *cookie,
uint16_t needed, int use_indirect, int can_push,
int in_order)
{
- struct virtio_tx_region *txr = txvq->virtio_net_hdr_mz->addr;
+ struct virtio_tx_region *txr = txvq->hdr_mz->addr;
struct vq_desc_extra *dxp;
struct virtqueue *vq = virtnet_txq_to_vq(txvq);
struct vring_packed_desc *start_dp, *head_dp;
@@ -646,10 +646,8 @@ virtqueue_enqueue_xmit_packed(struct virtnet_tx *txvq, struct rte_mbuf *cookie,
* the first slot in indirect ring is already preset
* to point to the header in reserved region
*/
- start_dp[idx].addr = txvq->virtio_net_hdr_mem +
- RTE_PTR_DIFF(&txr[idx].tx_packed_indir, txr);
- start_dp[idx].len = (seg_num + 1) *
- sizeof(struct vring_packed_desc);
+ start_dp[idx].addr = txvq->hdr_mem + RTE_PTR_DIFF(&txr[idx].tx_packed_indir, txr);
+ start_dp[idx].len = (seg_num + 1) * sizeof(struct vring_packed_desc);
/* Packed descriptor id needs to be restored when inorder. */
if (in_order)
start_dp[idx].id = idx;
@@ -665,9 +663,8 @@ virtqueue_enqueue_xmit_packed(struct virtnet_tx *txvq, struct rte_mbuf *cookie,
/* setup first tx ring slot to point to header
* stored in reserved region.
*/
- start_dp[idx].addr = txvq->virtio_net_hdr_mem +
- RTE_PTR_DIFF(&txr[idx].tx_hdr, txr);
- start_dp[idx].len = vq->hw->vtnet_hdr_size;
+ start_dp[idx].addr = txvq->hdr_mem + RTE_PTR_DIFF(&txr[idx].tx_hdr, txr);
+ start_dp[idx].len = vq->hw->vtnet_hdr_size;
hdr = (struct virtio_net_hdr *)&txr[idx].tx_hdr;
idx++;
if (idx >= vq->vq_nentries) {
--
2.38.1
* [PATCH v1 04/21] net/virtio: remove port ID info from Rx queue
2022-11-30 15:56 [PATCH v1 00/21] Add control queue & MQ support to Virtio-user vDPA Maxime Coquelin
` (2 preceding siblings ...)
2022-11-30 15:56 ` [PATCH v1 03/21] net/virtio: virtqueue headers alloc refactoring Maxime Coquelin
@ 2022-11-30 15:56 ` Maxime Coquelin
2023-01-30 7:51 ` Xia, Chenbo
2022-11-30 15:56 ` [PATCH v1 05/21] net/virtio: remove unused fields in Tx queue struct Maxime Coquelin
` (17 subsequent siblings)
21 siblings, 1 reply; 48+ messages in thread
From: Maxime Coquelin @ 2022-11-30 15:56 UTC (permalink / raw)
To: dev, chenbo.xia, david.marchand, eperezma; +Cc: Maxime Coquelin
The port ID information is duplicated in several places.
This patch removes it from the virtnet_rx struct, as it can
be found in the virtio_hw struct.
Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
---
drivers/net/virtio/virtio_ethdev.c | 1 -
drivers/net/virtio/virtio_rxtx.c | 25 ++++++++++---------------
drivers/net/virtio/virtio_rxtx.h | 1 -
drivers/net/virtio/virtio_rxtx_packed.c | 3 +--
drivers/net/virtio/virtio_rxtx_simple.c | 3 ++-
drivers/net/virtio/virtio_rxtx_simple.h | 5 +++--
6 files changed, 16 insertions(+), 22 deletions(-)
diff --git a/drivers/net/virtio/virtio_ethdev.c b/drivers/net/virtio/virtio_ethdev.c
index cead5f0884..1c68e5a283 100644
--- a/drivers/net/virtio/virtio_ethdev.c
+++ b/drivers/net/virtio/virtio_ethdev.c
@@ -462,7 +462,6 @@ virtio_init_queue(struct rte_eth_dev *dev, uint16_t queue_idx)
vq->sw_ring = sw_ring;
rxvq = &vq->rxq;
- rxvq->port_id = dev->data->port_id;
rxvq->mz = mz;
rxvq->fake_mbuf = fake_mbuf;
} else if (queue_type == VTNET_TQ) {
diff --git a/drivers/net/virtio/virtio_rxtx.c b/drivers/net/virtio/virtio_rxtx.c
index bd95e8ceb5..45c04aa3f8 100644
--- a/drivers/net/virtio/virtio_rxtx.c
+++ b/drivers/net/virtio/virtio_rxtx.c
@@ -1024,7 +1024,7 @@ virtio_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
continue;
}
- rxm->port = rxvq->port_id;
+ rxm->port = hw->port_id;
rxm->data_off = RTE_PKTMBUF_HEADROOM;
rxm->ol_flags = 0;
rxm->vlan_tci = 0;
@@ -1066,8 +1066,7 @@ virtio_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
}
nb_enqueued += free_cnt;
} else {
- struct rte_eth_dev *dev =
- &rte_eth_devices[rxvq->port_id];
+ struct rte_eth_dev *dev = &rte_eth_devices[hw->port_id];
dev->data->rx_mbuf_alloc_failed += free_cnt;
}
}
@@ -1127,7 +1126,7 @@ virtio_recv_pkts_packed(void *rx_queue, struct rte_mbuf **rx_pkts,
continue;
}
- rxm->port = rxvq->port_id;
+ rxm->port = hw->port_id;
rxm->data_off = RTE_PKTMBUF_HEADROOM;
rxm->ol_flags = 0;
rxm->vlan_tci = 0;
@@ -1169,8 +1168,7 @@ virtio_recv_pkts_packed(void *rx_queue, struct rte_mbuf **rx_pkts,
}
nb_enqueued += free_cnt;
} else {
- struct rte_eth_dev *dev =
- &rte_eth_devices[rxvq->port_id];
+ struct rte_eth_dev *dev = &rte_eth_devices[hw->port_id];
dev->data->rx_mbuf_alloc_failed += free_cnt;
}
}
@@ -1258,7 +1256,7 @@ virtio_recv_pkts_inorder(void *rx_queue,
rxm->pkt_len = (uint32_t)(len[i] - hdr_size);
rxm->data_len = (uint16_t)(len[i] - hdr_size);
- rxm->port = rxvq->port_id;
+ rxm->port = hw->port_id;
rx_pkts[nb_rx] = rxm;
prev = rxm;
@@ -1352,8 +1350,7 @@ virtio_recv_pkts_inorder(void *rx_queue,
}
nb_enqueued += free_cnt;
} else {
- struct rte_eth_dev *dev =
- &rte_eth_devices[rxvq->port_id];
+ struct rte_eth_dev *dev = &rte_eth_devices[hw->port_id];
dev->data->rx_mbuf_alloc_failed += free_cnt;
}
}
@@ -1437,7 +1434,7 @@ virtio_recv_mergeable_pkts(void *rx_queue,
rxm->pkt_len = (uint32_t)(len[i] - hdr_size);
rxm->data_len = (uint16_t)(len[i] - hdr_size);
- rxm->port = rxvq->port_id;
+ rxm->port = hw->port_id;
rx_pkts[nb_rx] = rxm;
prev = rxm;
@@ -1530,8 +1527,7 @@ virtio_recv_mergeable_pkts(void *rx_queue,
}
nb_enqueued += free_cnt;
} else {
- struct rte_eth_dev *dev =
- &rte_eth_devices[rxvq->port_id];
+ struct rte_eth_dev *dev = &rte_eth_devices[hw->port_id];
dev->data->rx_mbuf_alloc_failed += free_cnt;
}
}
@@ -1610,7 +1606,7 @@ virtio_recv_mergeable_pkts_packed(void *rx_queue,
rxm->pkt_len = (uint32_t)(len[i] - hdr_size);
rxm->data_len = (uint16_t)(len[i] - hdr_size);
- rxm->port = rxvq->port_id;
+ rxm->port = hw->port_id;
rx_pkts[nb_rx] = rxm;
prev = rxm;
@@ -1699,8 +1695,7 @@ virtio_recv_mergeable_pkts_packed(void *rx_queue,
}
nb_enqueued += free_cnt;
} else {
- struct rte_eth_dev *dev =
- &rte_eth_devices[rxvq->port_id];
+ struct rte_eth_dev *dev = &rte_eth_devices[hw->port_id];
dev->data->rx_mbuf_alloc_failed += free_cnt;
}
}
diff --git a/drivers/net/virtio/virtio_rxtx.h b/drivers/net/virtio/virtio_rxtx.h
index 226c722d64..97de9eb0a3 100644
--- a/drivers/net/virtio/virtio_rxtx.h
+++ b/drivers/net/virtio/virtio_rxtx.h
@@ -24,7 +24,6 @@ struct virtnet_rx {
struct rte_mempool *mpool; /**< mempool for mbuf allocation */
uint16_t queue_id; /**< DPDK queue index. */
- uint16_t port_id; /**< Device port identifier. */
/* Statistics */
struct virtnet_stats stats;
diff --git a/drivers/net/virtio/virtio_rxtx_packed.c b/drivers/net/virtio/virtio_rxtx_packed.c
index 45cf39df22..5f7d4903bc 100644
--- a/drivers/net/virtio/virtio_rxtx_packed.c
+++ b/drivers/net/virtio/virtio_rxtx_packed.c
@@ -124,8 +124,7 @@ virtio_recv_pkts_packed_vec(void *rx_queue,
free_cnt);
nb_enqueued += free_cnt;
} else {
- struct rte_eth_dev *dev =
- &rte_eth_devices[rxvq->port_id];
+ struct rte_eth_dev *dev = &rte_eth_devices[hw->port_id];
dev->data->rx_mbuf_alloc_failed += free_cnt;
}
}
diff --git a/drivers/net/virtio/virtio_rxtx_simple.c b/drivers/net/virtio/virtio_rxtx_simple.c
index f248869a8f..438256970d 100644
--- a/drivers/net/virtio/virtio_rxtx_simple.c
+++ b/drivers/net/virtio/virtio_rxtx_simple.c
@@ -30,12 +30,13 @@
int __rte_cold
virtio_rxq_vec_setup(struct virtnet_rx *rxq)
{
+ struct virtqueue *vq = virtnet_rxq_to_vq(rxq);
uintptr_t p;
struct rte_mbuf mb_def = { .buf_addr = 0 }; /* zeroed mbuf */
mb_def.nb_segs = 1;
mb_def.data_off = RTE_PKTMBUF_HEADROOM;
- mb_def.port = rxq->port_id;
+ mb_def.port = vq->hw->port_id;
rte_mbuf_refcnt_set(&mb_def, 1);
/* prevent compiler reordering: rearm_data covers previous fields */
diff --git a/drivers/net/virtio/virtio_rxtx_simple.h b/drivers/net/virtio/virtio_rxtx_simple.h
index d8f96e0434..8e235f4dbc 100644
--- a/drivers/net/virtio/virtio_rxtx_simple.h
+++ b/drivers/net/virtio/virtio_rxtx_simple.h
@@ -32,8 +32,9 @@ virtio_rxq_rearm_vec(struct virtnet_rx *rxvq)
ret = rte_mempool_get_bulk(rxvq->mpool, (void **)sw_ring,
RTE_VIRTIO_VPMD_RX_REARM_THRESH);
if (unlikely(ret)) {
- rte_eth_devices[rxvq->port_id].data->rx_mbuf_alloc_failed +=
- RTE_VIRTIO_VPMD_RX_REARM_THRESH;
+ struct rte_eth_dev *dev = &rte_eth_devices[vq->hw->port_id];
+
+ dev->data->rx_mbuf_alloc_failed += RTE_VIRTIO_VPMD_RX_REARM_THRESH;
return;
}
--
2.38.1
* [PATCH v1 05/21] net/virtio: remove unused fields in Tx queue struct
2022-11-30 15:56 [PATCH v1 00/21] Add control queue & MQ support to Virtio-user vDPA Maxime Coquelin
` (3 preceding siblings ...)
2022-11-30 15:56 ` [PATCH v1 04/21] net/virtio: remove port ID info from Rx queue Maxime Coquelin
@ 2022-11-30 15:56 ` Maxime Coquelin
2023-01-30 7:51 ` Xia, Chenbo
2022-11-30 15:56 ` [PATCH v1 06/21] net/virtio: remove unused queue ID field in Rx queue Maxime Coquelin
` (16 subsequent siblings)
21 siblings, 1 reply; 48+ messages in thread
From: Maxime Coquelin @ 2022-11-30 15:56 UTC (permalink / raw)
To: dev, chenbo.xia, david.marchand, eperezma; +Cc: Maxime Coquelin
The port and queue IDs are not used in the virtnet_tx struct,
so this patch removes them.
Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
---
drivers/net/virtio/virtio_ethdev.c | 1 -
drivers/net/virtio/virtio_rxtx.c | 1 -
drivers/net/virtio/virtio_rxtx.h | 3 ---
3 files changed, 5 deletions(-)
diff --git a/drivers/net/virtio/virtio_ethdev.c b/drivers/net/virtio/virtio_ethdev.c
index 1c68e5a283..a581fae408 100644
--- a/drivers/net/virtio/virtio_ethdev.c
+++ b/drivers/net/virtio/virtio_ethdev.c
@@ -466,7 +466,6 @@ virtio_init_queue(struct rte_eth_dev *dev, uint16_t queue_idx)
rxvq->fake_mbuf = fake_mbuf;
} else if (queue_type == VTNET_TQ) {
txvq = &vq->txq;
- txvq->port_id = dev->data->port_id;
txvq->mz = mz;
} else if (queue_type == VTNET_CQ) {
cvq = &vq->cq;
diff --git a/drivers/net/virtio/virtio_rxtx.c b/drivers/net/virtio/virtio_rxtx.c
index 45c04aa3f8..304403d46c 100644
--- a/drivers/net/virtio/virtio_rxtx.c
+++ b/drivers/net/virtio/virtio_rxtx.c
@@ -831,7 +831,6 @@ virtio_dev_tx_queue_setup(struct rte_eth_dev *dev,
vq->vq_free_cnt = RTE_MIN(vq->vq_free_cnt, nb_desc);
txvq = &vq->txq;
- txvq->queue_id = queue_idx;
tx_free_thresh = tx_conf->tx_free_thresh;
if (tx_free_thresh == 0)
diff --git a/drivers/net/virtio/virtio_rxtx.h b/drivers/net/virtio/virtio_rxtx.h
index 97de9eb0a3..9bbcf32f66 100644
--- a/drivers/net/virtio/virtio_rxtx.h
+++ b/drivers/net/virtio/virtio_rxtx.h
@@ -35,9 +35,6 @@ struct virtnet_tx {
const struct rte_memzone *hdr_mz; /**< memzone to populate hdr. */
rte_iova_t hdr_mem; /**< hdr for each xmit packet */
- uint16_t queue_id; /**< DPDK queue index. */
- uint16_t port_id; /**< Device port identifier. */
-
struct virtnet_stats stats; /* Statistics */
const struct rte_memzone *mz; /**< mem zone to populate TX ring. */
--
2.38.1
* [PATCH v1 06/21] net/virtio: remove unused queue ID field in Rx queue
2022-11-30 15:56 [PATCH v1 00/21] Add control queue & MQ support to Virtio-user vDPA Maxime Coquelin
` (4 preceding siblings ...)
2022-11-30 15:56 ` [PATCH v1 05/21] net/virtio: remove unused fields in Tx queue struct Maxime Coquelin
@ 2022-11-30 15:56 ` Maxime Coquelin
2023-01-30 7:52 ` Xia, Chenbo
2022-11-30 15:56 ` [PATCH v1 07/21] net/virtio: remove unused Port ID in control queue Maxime Coquelin
` (15 subsequent siblings)
21 siblings, 1 reply; 48+ messages in thread
From: Maxime Coquelin @ 2022-11-30 15:56 UTC (permalink / raw)
To: dev, chenbo.xia, david.marchand, eperezma; +Cc: Maxime Coquelin
This patch removes the unused queue ID field from the virtnet_rx struct.
Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
---
drivers/net/virtio/virtio_rxtx.c | 1 -
drivers/net/virtio/virtio_rxtx.h | 2 --
2 files changed, 3 deletions(-)
diff --git a/drivers/net/virtio/virtio_rxtx.c b/drivers/net/virtio/virtio_rxtx.c
index 304403d46c..4f69b97f41 100644
--- a/drivers/net/virtio/virtio_rxtx.c
+++ b/drivers/net/virtio/virtio_rxtx.c
@@ -703,7 +703,6 @@ virtio_dev_rx_queue_setup(struct rte_eth_dev *dev,
vq->vq_free_cnt = RTE_MIN(vq->vq_free_cnt, nb_desc);
rxvq = &vq->rxq;
- rxvq->queue_id = queue_idx;
rxvq->mpool = mp;
dev->data->rx_queues[queue_idx] = rxvq;
diff --git a/drivers/net/virtio/virtio_rxtx.h b/drivers/net/virtio/virtio_rxtx.h
index 9bbcf32f66..a5fe3ea95c 100644
--- a/drivers/net/virtio/virtio_rxtx.h
+++ b/drivers/net/virtio/virtio_rxtx.h
@@ -23,8 +23,6 @@ struct virtnet_rx {
uint64_t mbuf_initializer; /**< value to init mbufs. */
struct rte_mempool *mpool; /**< mempool for mbuf allocation */
- uint16_t queue_id; /**< DPDK queue index. */
-
/* Statistics */
struct virtnet_stats stats;
--
2.38.1
* [PATCH v1 07/21] net/virtio: remove unused Port ID in control queue
2022-11-30 15:56 [PATCH v1 00/21] Add control queue & MQ support to Virtio-user vDPA Maxime Coquelin
` (5 preceding siblings ...)
2022-11-30 15:56 ` [PATCH v1 06/21] net/virtio: remove unused queue ID field in Rx queue Maxime Coquelin
@ 2022-11-30 15:56 ` Maxime Coquelin
2023-01-30 7:52 ` Xia, Chenbo
2022-11-30 15:56 ` [PATCH v1 08/21] net/virtio: move vring memzone to virtqueue struct Maxime Coquelin
` (14 subsequent siblings)
21 siblings, 1 reply; 48+ messages in thread
From: Maxime Coquelin @ 2022-11-30 15:56 UTC (permalink / raw)
To: dev, chenbo.xia, david.marchand, eperezma; +Cc: Maxime Coquelin
This patch removes the unused port ID information from the
virtnet_ctl struct.
Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
---
drivers/net/virtio/virtio_cvq.h | 1 -
1 file changed, 1 deletion(-)
diff --git a/drivers/net/virtio/virtio_cvq.h b/drivers/net/virtio/virtio_cvq.h
index 226561e6b8..0ff326b063 100644
--- a/drivers/net/virtio/virtio_cvq.h
+++ b/drivers/net/virtio/virtio_cvq.h
@@ -108,7 +108,6 @@ typedef uint8_t virtio_net_ctrl_ack;
struct virtnet_ctl {
const struct rte_memzone *hdr_mz; /**< memzone to populate hdr. */
rte_iova_t hdr_mem; /**< hdr for each xmit packet */
- uint16_t port_id; /**< Device port identifier. */
const struct rte_memzone *mz; /**< mem zone to populate CTL ring. */
rte_spinlock_t lock; /**< spinlock for control queue. */
void (*notify_queue)(struct virtqueue *vq, void *cookie); /**< notify ops. */
--
2.38.1
* [PATCH v1 08/21] net/virtio: move vring memzone to virtqueue struct
2022-11-30 15:56 [PATCH v1 00/21] Add control queue & MQ support to Virtio-user vDPA Maxime Coquelin
` (6 preceding siblings ...)
2022-11-30 15:56 ` [PATCH v1 07/21] net/virtio: remove unused Port ID in control queue Maxime Coquelin
@ 2022-11-30 15:56 ` Maxime Coquelin
2023-01-30 7:52 ` Xia, Chenbo
2022-11-30 15:56 ` [PATCH v1 09/21] net/virtio: refactor indirect desc headers init Maxime Coquelin
` (13 subsequent siblings)
21 siblings, 1 reply; 48+ messages in thread
From: Maxime Coquelin @ 2022-11-30 15:56 UTC (permalink / raw)
To: dev, chenbo.xia, david.marchand, eperezma; +Cc: Maxime Coquelin
Whatever their type (Rx, Tx or Ctl), all virtqueues require
a memzone for their vrings. This patch moves the memzone
pointer into the virtqueue struct, simplifying the code.
Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
---
drivers/net/virtio/virtio_cvq.h | 1 -
drivers/net/virtio/virtio_ethdev.c | 11 ++---------
drivers/net/virtio/virtio_rxtx.h | 4 ----
drivers/net/virtio/virtqueue.c | 6 ++----
drivers/net/virtio/virtqueue.h | 1 +
5 files changed, 5 insertions(+), 18 deletions(-)
diff --git a/drivers/net/virtio/virtio_cvq.h b/drivers/net/virtio/virtio_cvq.h
index 0ff326b063..70739ae04b 100644
--- a/drivers/net/virtio/virtio_cvq.h
+++ b/drivers/net/virtio/virtio_cvq.h
@@ -108,7 +108,6 @@ typedef uint8_t virtio_net_ctrl_ack;
struct virtnet_ctl {
const struct rte_memzone *hdr_mz; /**< memzone to populate hdr. */
rte_iova_t hdr_mem; /**< hdr for each xmit packet */
- const struct rte_memzone *mz; /**< mem zone to populate CTL ring. */
rte_spinlock_t lock; /**< spinlock for control queue. */
void (*notify_queue)(struct virtqueue *vq, void *cookie); /**< notify ops. */
void *notify_cookie; /**< cookie for notify ops */
diff --git a/drivers/net/virtio/virtio_ethdev.c b/drivers/net/virtio/virtio_ethdev.c
index a581fae408..b546916a9f 100644
--- a/drivers/net/virtio/virtio_ethdev.c
+++ b/drivers/net/virtio/virtio_ethdev.c
@@ -423,6 +423,7 @@ virtio_init_queue(struct rte_eth_dev *dev, uint16_t queue_idx)
memset(mz->addr, 0, mz->len);
+ vq->mz = mz;
if (hw->use_va)
vq->vq_ring_mem = (uintptr_t)mz->addr;
else
@@ -462,14 +463,11 @@ virtio_init_queue(struct rte_eth_dev *dev, uint16_t queue_idx)
vq->sw_ring = sw_ring;
rxvq = &vq->rxq;
- rxvq->mz = mz;
rxvq->fake_mbuf = fake_mbuf;
} else if (queue_type == VTNET_TQ) {
txvq = &vq->txq;
- txvq->mz = mz;
} else if (queue_type == VTNET_CQ) {
cvq = &vq->cq;
- cvq->mz = mz;
hw->cvq = cvq;
vq->cq.notify_queue = &virtio_control_queue_notify;
}
@@ -550,15 +548,10 @@ virtio_free_queues(struct virtio_hw *hw)
if (queue_type == VTNET_RQ) {
rte_free(vq->rxq.fake_mbuf);
rte_free(vq->sw_ring);
- rte_memzone_free(vq->rxq.mz);
- } else if (queue_type == VTNET_TQ) {
- rte_memzone_free(vq->txq.mz);
- } else {
- rte_memzone_free(vq->cq.mz);
}
virtio_free_queue_headers(vq);
-
+ rte_memzone_free(vq->mz);
rte_free(vq);
hw->vqs[i] = NULL;
}
diff --git a/drivers/net/virtio/virtio_rxtx.h b/drivers/net/virtio/virtio_rxtx.h
index a5fe3ea95c..57af630110 100644
--- a/drivers/net/virtio/virtio_rxtx.h
+++ b/drivers/net/virtio/virtio_rxtx.h
@@ -25,8 +25,6 @@ struct virtnet_rx {
/* Statistics */
struct virtnet_stats stats;
-
- const struct rte_memzone *mz; /**< mem zone to populate RX ring. */
};
struct virtnet_tx {
@@ -34,8 +32,6 @@ struct virtnet_tx {
rte_iova_t hdr_mem; /**< hdr for each xmit packet */
struct virtnet_stats stats; /* Statistics */
-
- const struct rte_memzone *mz; /**< mem zone to populate TX ring. */
};
int virtio_rxq_vec_setup(struct virtnet_rx *rxvq);
diff --git a/drivers/net/virtio/virtqueue.c b/drivers/net/virtio/virtqueue.c
index 3b174a5923..41e3529546 100644
--- a/drivers/net/virtio/virtqueue.c
+++ b/drivers/net/virtio/virtqueue.c
@@ -148,7 +148,6 @@ virtqueue_rxvq_reset_packed(struct virtqueue *vq)
{
int size = vq->vq_nentries;
struct vq_desc_extra *dxp;
- struct virtnet_rx *rxvq;
uint16_t desc_idx;
vq->vq_used_cons_idx = 0;
@@ -162,8 +161,7 @@ virtqueue_rxvq_reset_packed(struct virtqueue *vq)
vq->vq_packed.event_flags_shadow = 0;
vq->vq_packed.cached_flags |= VRING_DESC_F_WRITE;
- rxvq = &vq->rxq;
- memset(rxvq->mz->addr, 0, rxvq->mz->len);
+ memset(vq->mz->addr, 0, vq->mz->len);
for (desc_idx = 0; desc_idx < vq->vq_nentries; desc_idx++) {
dxp = &vq->vq_descx[desc_idx];
@@ -201,7 +199,7 @@ virtqueue_txvq_reset_packed(struct virtqueue *vq)
txvq = &vq->txq;
txr = txvq->hdr_mz->addr;
- memset(txvq->mz->addr, 0, txvq->mz->len);
+ memset(vq->mz->addr, 0, vq->mz->len);
memset(txvq->hdr_mz->addr, 0, txvq->hdr_mz->len);
for (desc_idx = 0; desc_idx < vq->vq_nentries; desc_idx++) {
diff --git a/drivers/net/virtio/virtqueue.h b/drivers/net/virtio/virtqueue.h
index f5058f362c..8b7bfae643 100644
--- a/drivers/net/virtio/virtqueue.h
+++ b/drivers/net/virtio/virtqueue.h
@@ -201,6 +201,7 @@ struct virtqueue {
struct virtnet_ctl cq;
};
+ const struct rte_memzone *mz; /**< mem zone to populate ring. */
rte_iova_t vq_ring_mem; /**< physical address of vring,
* or virtual address for virtio_user. */
--
2.38.1
* [PATCH v1 09/21] net/virtio: refactor indirect desc headers init
2022-11-30 15:56 [PATCH v1 00/21] Add control queue & MQ support to Virtio-user vDPA Maxime Coquelin
` (7 preceding siblings ...)
2022-11-30 15:56 ` [PATCH v1 08/21] net/virtio: move vring memzone to virtqueue struct Maxime Coquelin
@ 2022-11-30 15:56 ` Maxime Coquelin
2023-01-30 7:52 ` Xia, Chenbo
2022-11-30 15:56 ` [PATCH v1 10/21] net/virtio: alloc Rx SW ring only if vectorized path Maxime Coquelin
` (12 subsequent siblings)
21 siblings, 1 reply; 48+ messages in thread
From: Maxime Coquelin @ 2022-11-30 15:56 UTC (permalink / raw)
To: dev, chenbo.xia, david.marchand, eperezma; +Cc: Maxime Coquelin
This patch refactors the indirect descriptor headers
initialization into a dedicated function, which is now used
by both the queue init and reset functions.
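The address computation the shared helper performs can be sketched standalone (hypothetical layout, not the DPDK `struct virtio_tx_region`): each per-packet region holds the Tx header followed by its indirect descriptor table, and the first indirect descriptor must point at the header of its own region within the contiguous header memzone.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical layout mirroring the per-packet Tx region. */
struct tx_hdr { uint8_t flags; uint8_t gso_type; uint16_t hdr_len; };
struct tx_region {
	struct tx_hdr tx_hdr;
	uint64_t tx_indir[8]; /* stand-in for indirect descriptors */
};

/* IOVA of region idx's header inside the contiguous header memzone:
 * base + idx regions + offset of the header within a region. */
static uint64_t indirect_hdr_addr(uint64_t hdr_mem, uint32_t idx)
{
	return hdr_mem + idx * sizeof(struct tx_region)
	       + offsetof(struct tx_region, tx_hdr);
}
```

Since the header is the first member, region 0's header address equals the memzone base, matching the "first indirect descriptor is always the tx header" comment in the removed code.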
Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
---
drivers/net/virtio/virtio_ethdev.c | 30 +------------
drivers/net/virtio/virtqueue.c | 68 ++++++++++++++++++++++--------
drivers/net/virtio/virtqueue.h | 2 +
3 files changed, 54 insertions(+), 46 deletions(-)
diff --git a/drivers/net/virtio/virtio_ethdev.c b/drivers/net/virtio/virtio_ethdev.c
index b546916a9f..8b17b450ec 100644
--- a/drivers/net/virtio/virtio_ethdev.c
+++ b/drivers/net/virtio/virtio_ethdev.c
@@ -347,7 +347,6 @@ virtio_init_queue(struct rte_eth_dev *dev, uint16_t queue_idx)
unsigned int vq_size, size;
struct virtio_hw *hw = dev->data->dev_private;
struct virtnet_rx *rxvq = NULL;
- struct virtnet_tx *txvq = NULL;
struct virtnet_ctl *cvq = NULL;
struct virtqueue *vq;
void *sw_ring = NULL;
@@ -465,7 +464,7 @@ virtio_init_queue(struct rte_eth_dev *dev, uint16_t queue_idx)
rxvq = &vq->rxq;
rxvq->fake_mbuf = fake_mbuf;
} else if (queue_type == VTNET_TQ) {
- txvq = &vq->txq;
+ virtqueue_txq_indirect_headers_init(vq);
} else if (queue_type == VTNET_CQ) {
cvq = &vq->cq;
hw->cvq = cvq;
@@ -477,33 +476,6 @@ virtio_init_queue(struct rte_eth_dev *dev, uint16_t queue_idx)
else
vq->mbuf_addr_offset = offsetof(struct rte_mbuf, buf_iova);
- if (queue_type == VTNET_TQ) {
- struct virtio_tx_region *txr;
- unsigned int i;
-
- txr = txvq->hdr_mz->addr;
- for (i = 0; i < vq_size; i++) {
- /* first indirect descriptor is always the tx header */
- if (!virtio_with_packed_queue(hw)) {
- struct vring_desc *start_dp = txr[i].tx_indir;
- vring_desc_init_split(start_dp,
- RTE_DIM(txr[i].tx_indir));
- start_dp->addr = txvq->hdr_mem + i * sizeof(*txr)
- + offsetof(struct virtio_tx_region, tx_hdr);
- start_dp->len = hw->vtnet_hdr_size;
- start_dp->flags = VRING_DESC_F_NEXT;
- } else {
- struct vring_packed_desc *start_dp =
- txr[i].tx_packed_indir;
- vring_desc_init_indirect_packed(start_dp,
- RTE_DIM(txr[i].tx_packed_indir));
- start_dp->addr = txvq->hdr_mem + i * sizeof(*txr)
- + offsetof(struct virtio_tx_region, tx_hdr);
- start_dp->len = hw->vtnet_hdr_size;
- }
- }
- }
-
if (VIRTIO_OPS(hw)->setup_queue(hw, vq) < 0) {
PMD_INIT_LOG(ERR, "setup_queue failed");
ret = -EINVAL;
diff --git a/drivers/net/virtio/virtqueue.c b/drivers/net/virtio/virtqueue.c
index 41e3529546..fb651a4ca3 100644
--- a/drivers/net/virtio/virtqueue.c
+++ b/drivers/net/virtio/virtqueue.c
@@ -143,6 +143,54 @@ virtqueue_rxvq_flush(struct virtqueue *vq)
virtqueue_rxvq_flush_split(vq);
}
+static void
+virtqueue_txq_indirect_header_init_packed(struct virtqueue *vq, uint32_t idx)
+{
+ struct virtio_tx_region *txr;
+ struct vring_packed_desc *desc;
+ rte_iova_t hdr_mem;
+
+ txr = vq->txq.hdr_mz->addr;
+ hdr_mem = vq->txq.hdr_mem;
+ desc = txr[idx].tx_packed_indir;
+
+ vring_desc_init_indirect_packed(desc, RTE_DIM(txr[idx].tx_packed_indir));
+ desc->addr = hdr_mem + idx * sizeof(*txr) + offsetof(struct virtio_tx_region, tx_hdr);
+ desc->len = vq->hw->vtnet_hdr_size;
+}
+
+static void
+virtqueue_txq_indirect_header_init_split(struct virtqueue *vq, uint32_t idx)
+{
+ struct virtio_tx_region *txr;
+ struct vring_desc *desc;
+ rte_iova_t hdr_mem;
+
+ txr = vq->txq.hdr_mz->addr;
+ hdr_mem = vq->txq.hdr_mem;
+ desc = txr[idx].tx_indir;
+
+ vring_desc_init_split(desc, RTE_DIM(txr[idx].tx_indir));
+ desc->addr = hdr_mem + idx * sizeof(*txr) + offsetof(struct virtio_tx_region, tx_hdr);
+ desc->len = vq->hw->vtnet_hdr_size;
+ desc->flags = VRING_DESC_F_NEXT;
+}
+
+void
+virtqueue_txq_indirect_headers_init(struct virtqueue *vq)
+{
+ uint32_t i;
+
+ if (!virtio_with_feature(vq->hw, VIRTIO_RING_F_INDIRECT_DESC))
+ return;
+
+ for (i = 0; i < vq->vq_nentries; i++)
+ if (virtio_with_packed_queue(vq->hw))
+ virtqueue_txq_indirect_header_init_packed(vq, i);
+ else
+ virtqueue_txq_indirect_header_init_split(vq, i);
+}
+
int
virtqueue_rxvq_reset_packed(struct virtqueue *vq)
{
@@ -182,10 +230,7 @@ virtqueue_txvq_reset_packed(struct virtqueue *vq)
{
int size = vq->vq_nentries;
struct vq_desc_extra *dxp;
- struct virtnet_tx *txvq;
uint16_t desc_idx;
- struct virtio_tx_region *txr;
- struct vring_packed_desc *start_dp;
vq->vq_used_cons_idx = 0;
vq->vq_desc_head_idx = 0;
@@ -197,10 +242,8 @@ virtqueue_txvq_reset_packed(struct virtqueue *vq)
vq->vq_packed.cached_flags = VRING_PACKED_DESC_F_AVAIL;
vq->vq_packed.event_flags_shadow = 0;
- txvq = &vq->txq;
- txr = txvq->hdr_mz->addr;
memset(vq->mz->addr, 0, vq->mz->len);
- memset(txvq->hdr_mz->addr, 0, txvq->hdr_mz->len);
+ memset(vq->txq.hdr_mz->addr, 0, vq->txq.hdr_mz->len);
for (desc_idx = 0; desc_idx < vq->vq_nentries; desc_idx++) {
dxp = &vq->vq_descx[desc_idx];
@@ -208,20 +251,11 @@ virtqueue_txvq_reset_packed(struct virtqueue *vq)
rte_pktmbuf_free(dxp->cookie);
dxp->cookie = NULL;
}
-
- if (virtio_with_feature(vq->hw, VIRTIO_RING_F_INDIRECT_DESC)) {
- /* first indirect descriptor is always the tx header */
- start_dp = txr[desc_idx].tx_packed_indir;
- vring_desc_init_indirect_packed(start_dp,
- RTE_DIM(txr[desc_idx].tx_packed_indir));
- start_dp->addr = txvq->hdr_mem + desc_idx * sizeof(*txr)
- + offsetof(struct virtio_tx_region, tx_hdr);
- start_dp->len = vq->hw->vtnet_hdr_size;
- }
}
+ virtqueue_txq_indirect_headers_init(vq);
vring_desc_init_packed(vq, size);
-
virtqueue_disable_intr(vq);
+
return 0;
}
diff --git a/drivers/net/virtio/virtqueue.h b/drivers/net/virtio/virtqueue.h
index 8b7bfae643..d453c3ec26 100644
--- a/drivers/net/virtio/virtqueue.h
+++ b/drivers/net/virtio/virtqueue.h
@@ -384,6 +384,8 @@ int virtqueue_rxvq_reset_packed(struct virtqueue *vq);
int virtqueue_txvq_reset_packed(struct virtqueue *vq);
+void virtqueue_txq_indirect_headers_init(struct virtqueue *vq);
+
static inline int
virtqueue_full(const struct virtqueue *vq)
{
--
2.38.1
* [PATCH v1 10/21] net/virtio: alloc Rx SW ring only if vectorized path
2022-11-30 15:56 [PATCH v1 00/21] Add control queue & MQ support to Virtio-user vDPA Maxime Coquelin
` (8 preceding siblings ...)
2022-11-30 15:56 ` [PATCH v1 09/21] net/virtio: refactor indirect desc headers init Maxime Coquelin
@ 2022-11-30 15:56 ` Maxime Coquelin
2023-01-30 7:49 ` Xia, Chenbo
2022-11-30 15:56 ` [PATCH v1 11/21] net/virtio: extract virtqueue init from virtio queue init Maxime Coquelin
` (11 subsequent siblings)
21 siblings, 1 reply; 48+ messages in thread
From: Maxime Coquelin @ 2022-11-30 15:56 UTC (permalink / raw)
To: dev, chenbo.xia, david.marchand, eperezma; +Cc: Maxime Coquelin
This patch allocates the SW ring only when the vectorized
datapath is used. It also moves the SW ring and fake mbuf
into the virtnet_rx struct since they are Rx-only.
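The conditional-allocation pattern reduces, in a simplified standalone sketch (hypothetical names, not the driver's types), to an early return that leaves the pointer NULL when the vectorized Rx path is disabled:

```c
#include <assert.h>
#include <stdlib.h>

/* Hypothetical, simplified stand-ins for the driver structures. */
struct rxq { void **sw_ring; };
struct vq {
	int use_vec_rx;
	unsigned int nentries;
	struct rxq rxq;
};

/* Returns 0 on success. Skips the allocation entirely when the
 * vectorized Rx datapath is not in use: sw_ring stays NULL and no
 * memory is wasted on the scalar paths. */
static int rxq_sw_ring_alloc(struct vq *vq)
{
	if (!vq->use_vec_rx)
		return 0;

	vq->rxq.sw_ring = calloc(vq->nentries, sizeof(void *));
	return vq->rxq.sw_ring != NULL ? 0 : -1;
}
```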
Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
---
drivers/net/virtio/virtio_ethdev.c | 88 ++++++++++++-------
drivers/net/virtio/virtio_rxtx.c | 8 +-
drivers/net/virtio/virtio_rxtx.h | 4 +-
drivers/net/virtio/virtio_rxtx_simple.h | 2 +-
.../net/virtio/virtio_rxtx_simple_altivec.c | 4 +-
drivers/net/virtio/virtio_rxtx_simple_neon.c | 4 +-
drivers/net/virtio/virtio_rxtx_simple_sse.c | 4 +-
drivers/net/virtio/virtqueue.c | 6 +-
drivers/net/virtio/virtqueue.h | 1 -
9 files changed, 72 insertions(+), 49 deletions(-)
diff --git a/drivers/net/virtio/virtio_ethdev.c b/drivers/net/virtio/virtio_ethdev.c
index 8b17b450ec..46dd5606f6 100644
--- a/drivers/net/virtio/virtio_ethdev.c
+++ b/drivers/net/virtio/virtio_ethdev.c
@@ -339,6 +339,47 @@ virtio_free_queue_headers(struct virtqueue *vq)
*hdr_mem = 0;
}
+static int
+virtio_rxq_sw_ring_alloc(struct virtqueue *vq, int numa_node)
+{
+ void *sw_ring;
+ struct rte_mbuf *mbuf;
+ size_t size;
+
+ /* SW ring is only used with vectorized datapath */
+ if (!vq->hw->use_vec_rx)
+ return 0;
+
+ size = (RTE_PMD_VIRTIO_RX_MAX_BURST + vq->vq_nentries) * sizeof(vq->rxq.sw_ring[0]);
+
+ sw_ring = rte_zmalloc_socket("sw_ring", size, RTE_CACHE_LINE_SIZE, numa_node);
+ if (!sw_ring) {
+ PMD_INIT_LOG(ERR, "can not allocate RX soft ring");
+ return -ENOMEM;
+ }
+
+ mbuf = rte_zmalloc_socket("sw_ring", sizeof(*mbuf), RTE_CACHE_LINE_SIZE, numa_node);
+ if (!mbuf) {
+ PMD_INIT_LOG(ERR, "can not allocate fake mbuf");
+ rte_free(sw_ring);
+ return -ENOMEM;
+ }
+
+ vq->rxq.sw_ring = sw_ring;
+ vq->rxq.fake_mbuf = mbuf;
+
+ return 0;
+}
+
+static void
+virtio_rxq_sw_ring_free(struct virtqueue *vq)
+{
+ rte_free(vq->rxq.fake_mbuf);
+ vq->rxq.fake_mbuf = NULL;
+ rte_free(vq->rxq.sw_ring);
+ vq->rxq.sw_ring = NULL;
+}
+
static int
virtio_init_queue(struct rte_eth_dev *dev, uint16_t queue_idx)
{
@@ -346,14 +387,11 @@ virtio_init_queue(struct rte_eth_dev *dev, uint16_t queue_idx)
const struct rte_memzone *mz = NULL;
unsigned int vq_size, size;
struct virtio_hw *hw = dev->data->dev_private;
- struct virtnet_rx *rxvq = NULL;
struct virtnet_ctl *cvq = NULL;
struct virtqueue *vq;
- void *sw_ring = NULL;
int queue_type = virtio_get_queue_type(hw, queue_idx);
int ret;
int numa_node = dev->device->numa_node;
- struct rte_mbuf *fake_mbuf = NULL;
PMD_INIT_LOG(INFO, "setting up queue: %u on NUMA node %d",
queue_idx, numa_node);
@@ -441,28 +479,9 @@ virtio_init_queue(struct rte_eth_dev *dev, uint16_t queue_idx)
}
if (queue_type == VTNET_RQ) {
- size_t sz_sw = (RTE_PMD_VIRTIO_RX_MAX_BURST + vq_size) *
- sizeof(vq->sw_ring[0]);
-
- sw_ring = rte_zmalloc_socket("sw_ring", sz_sw,
- RTE_CACHE_LINE_SIZE, numa_node);
- if (!sw_ring) {
- PMD_INIT_LOG(ERR, "can not allocate RX soft ring");
- ret = -ENOMEM;
+ ret = virtio_rxq_sw_ring_alloc(vq, numa_node);
+ if (ret)
goto free_hdr_mz;
- }
-
- fake_mbuf = rte_zmalloc_socket("sw_ring", sizeof(*fake_mbuf),
- RTE_CACHE_LINE_SIZE, numa_node);
- if (!fake_mbuf) {
- PMD_INIT_LOG(ERR, "can not allocate fake mbuf");
- ret = -ENOMEM;
- goto free_sw_ring;
- }
-
- vq->sw_ring = sw_ring;
- rxvq = &vq->rxq;
- rxvq->fake_mbuf = fake_mbuf;
} else if (queue_type == VTNET_TQ) {
virtqueue_txq_indirect_headers_init(vq);
} else if (queue_type == VTNET_CQ) {
@@ -486,9 +505,8 @@ virtio_init_queue(struct rte_eth_dev *dev, uint16_t queue_idx)
clean_vq:
hw->cvq = NULL;
- rte_free(fake_mbuf);
-free_sw_ring:
- rte_free(sw_ring);
+ if (queue_type == VTNET_RQ)
+ virtio_rxq_sw_ring_free(vq);
free_hdr_mz:
virtio_free_queue_headers(vq);
free_mz:
@@ -519,7 +537,7 @@ virtio_free_queues(struct virtio_hw *hw)
queue_type = virtio_get_queue_type(hw, i);
if (queue_type == VTNET_RQ) {
rte_free(vq->rxq.fake_mbuf);
- rte_free(vq->sw_ring);
+ rte_free(vq->rxq.sw_ring);
}
virtio_free_queue_headers(vq);
@@ -2195,6 +2213,11 @@ eth_virtio_dev_init(struct rte_eth_dev *eth_dev)
rte_spinlock_init(&hw->state_lock);
+ if (vectorized) {
+ hw->use_vec_rx = 1;
+ hw->use_vec_tx = 1;
+ }
+
/* reset device and negotiate default features */
ret = virtio_init_device(eth_dev, VIRTIO_PMD_DEFAULT_GUEST_FEATURES);
if (ret < 0)
@@ -2202,12 +2225,11 @@ eth_virtio_dev_init(struct rte_eth_dev *eth_dev)
if (vectorized) {
if (!virtio_with_packed_queue(hw)) {
- hw->use_vec_rx = 1;
+ hw->use_vec_tx = 0;
} else {
-#if defined(CC_AVX512_SUPPORT) || defined(RTE_ARCH_ARM)
- hw->use_vec_rx = 1;
- hw->use_vec_tx = 1;
-#else
+#if !defined(CC_AVX512_SUPPORT) && !defined(RTE_ARCH_ARM)
+ hw->use_vec_rx = 0;
+ hw->use_vec_tx = 0;
PMD_DRV_LOG(INFO,
"building environment do not support packed ring vectorized");
#endif
diff --git a/drivers/net/virtio/virtio_rxtx.c b/drivers/net/virtio/virtio_rxtx.c
index 4f69b97f41..2d0afd3302 100644
--- a/drivers/net/virtio/virtio_rxtx.c
+++ b/drivers/net/virtio/virtio_rxtx.c
@@ -737,9 +737,11 @@ virtio_dev_rx_queue_setup_finish(struct rte_eth_dev *dev, uint16_t queue_idx)
virtio_rxq_vec_setup(rxvq);
}
- memset(rxvq->fake_mbuf, 0, sizeof(*rxvq->fake_mbuf));
- for (desc_idx = 0; desc_idx < RTE_PMD_VIRTIO_RX_MAX_BURST; desc_idx++)
- vq->sw_ring[vq->vq_nentries + desc_idx] = rxvq->fake_mbuf;
+ if (hw->use_vec_rx) {
+ memset(rxvq->fake_mbuf, 0, sizeof(*rxvq->fake_mbuf));
+ for (desc_idx = 0; desc_idx < RTE_PMD_VIRTIO_RX_MAX_BURST; desc_idx++)
+ vq->rxq.sw_ring[vq->vq_nentries + desc_idx] = rxvq->fake_mbuf;
+ }
if (hw->use_vec_rx && !virtio_with_packed_queue(hw)) {
while (vq->vq_free_cnt >= RTE_VIRTIO_VPMD_RX_REARM_THRESH) {
diff --git a/drivers/net/virtio/virtio_rxtx.h b/drivers/net/virtio/virtio_rxtx.h
index 57af630110..afc4b74534 100644
--- a/drivers/net/virtio/virtio_rxtx.h
+++ b/drivers/net/virtio/virtio_rxtx.h
@@ -18,8 +18,8 @@ struct virtnet_stats {
};
struct virtnet_rx {
- /* dummy mbuf, for wraparound when processing RX ring. */
- struct rte_mbuf *fake_mbuf;
+ struct rte_mbuf **sw_ring; /**< RX software ring. */
+ struct rte_mbuf *fake_mbuf; /**< dummy mbuf, for wraparound when processing RX ring. */
uint64_t mbuf_initializer; /**< value to init mbufs. */
struct rte_mempool *mpool; /**< mempool for mbuf allocation */
diff --git a/drivers/net/virtio/virtio_rxtx_simple.h b/drivers/net/virtio/virtio_rxtx_simple.h
index 8e235f4dbc..79196ed86e 100644
--- a/drivers/net/virtio/virtio_rxtx_simple.h
+++ b/drivers/net/virtio/virtio_rxtx_simple.h
@@ -26,7 +26,7 @@ virtio_rxq_rearm_vec(struct virtnet_rx *rxvq)
struct virtqueue *vq = virtnet_rxq_to_vq(rxvq);
desc_idx = vq->vq_avail_idx & (vq->vq_nentries - 1);
- sw_ring = &vq->sw_ring[desc_idx];
+ sw_ring = &vq->rxq.sw_ring[desc_idx];
start_dp = &vq->vq_split.ring.desc[desc_idx];
ret = rte_mempool_get_bulk(rxvq->mpool, (void **)sw_ring,
diff --git a/drivers/net/virtio/virtio_rxtx_simple_altivec.c b/drivers/net/virtio/virtio_rxtx_simple_altivec.c
index e7f0ed6068..7910efc153 100644
--- a/drivers/net/virtio/virtio_rxtx_simple_altivec.c
+++ b/drivers/net/virtio/virtio_rxtx_simple_altivec.c
@@ -103,8 +103,8 @@ virtio_recv_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
desc_idx = (uint16_t)(vq->vq_used_cons_idx & (vq->vq_nentries - 1));
rused = &vq->vq_split.ring.used->ring[desc_idx];
- sw_ring = &vq->sw_ring[desc_idx];
- sw_ring_end = &vq->sw_ring[vq->vq_nentries];
+ sw_ring = &vq->rxq.sw_ring[desc_idx];
+ sw_ring_end = &vq->rxq.sw_ring[vq->vq_nentries];
rte_prefetch0(rused);
diff --git a/drivers/net/virtio/virtio_rxtx_simple_neon.c b/drivers/net/virtio/virtio_rxtx_simple_neon.c
index 7fd92d1b0c..ffaa139bd6 100644
--- a/drivers/net/virtio/virtio_rxtx_simple_neon.c
+++ b/drivers/net/virtio/virtio_rxtx_simple_neon.c
@@ -101,8 +101,8 @@ virtio_recv_pkts_vec(void *rx_queue,
desc_idx = (uint16_t)(vq->vq_used_cons_idx & (vq->vq_nentries - 1));
rused = &vq->vq_split.ring.used->ring[desc_idx];
- sw_ring = &vq->sw_ring[desc_idx];
- sw_ring_end = &vq->sw_ring[vq->vq_nentries];
+ sw_ring = &vq->rxq.sw_ring[desc_idx];
+ sw_ring_end = &vq->rxq.sw_ring[vq->vq_nentries];
rte_prefetch_non_temporal(rused);
diff --git a/drivers/net/virtio/virtio_rxtx_simple_sse.c b/drivers/net/virtio/virtio_rxtx_simple_sse.c
index 7577f5e86d..ed608fbf2e 100644
--- a/drivers/net/virtio/virtio_rxtx_simple_sse.c
+++ b/drivers/net/virtio/virtio_rxtx_simple_sse.c
@@ -101,8 +101,8 @@ virtio_recv_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
desc_idx = (uint16_t)(vq->vq_used_cons_idx & (vq->vq_nentries - 1));
rused = &vq->vq_split.ring.used->ring[desc_idx];
- sw_ring = &vq->sw_ring[desc_idx];
- sw_ring_end = &vq->sw_ring[vq->vq_nentries];
+ sw_ring = &vq->rxq.sw_ring[desc_idx];
+ sw_ring_end = &vq->rxq.sw_ring[vq->vq_nentries];
rte_prefetch0(rused);
diff --git a/drivers/net/virtio/virtqueue.c b/drivers/net/virtio/virtqueue.c
index fb651a4ca3..7a84796513 100644
--- a/drivers/net/virtio/virtqueue.c
+++ b/drivers/net/virtio/virtqueue.c
@@ -38,9 +38,9 @@ virtqueue_detach_unused(struct virtqueue *vq)
continue;
if (start > end && (idx >= start || idx < end))
continue;
- cookie = vq->sw_ring[idx];
+ cookie = vq->rxq.sw_ring[idx];
if (cookie != NULL) {
- vq->sw_ring[idx] = NULL;
+ vq->rxq.sw_ring[idx] = NULL;
return cookie;
}
} else {
@@ -100,7 +100,7 @@ virtqueue_rxvq_flush_split(struct virtqueue *vq)
uep = &vq->vq_split.ring.used->ring[used_idx];
if (hw->use_vec_rx) {
desc_idx = used_idx;
- rte_pktmbuf_free(vq->sw_ring[desc_idx]);
+ rte_pktmbuf_free(vq->rxq.sw_ring[desc_idx]);
vq->vq_free_cnt++;
} else if (hw->use_inorder_rx) {
desc_idx = (uint16_t)uep->id;
diff --git a/drivers/net/virtio/virtqueue.h b/drivers/net/virtio/virtqueue.h
index d453c3ec26..d7f8ee79bb 100644
--- a/drivers/net/virtio/virtqueue.h
+++ b/drivers/net/virtio/virtqueue.h
@@ -206,7 +206,6 @@ struct virtqueue {
* or virtual address for virtio_user. */
uint16_t *notify_addr;
- struct rte_mbuf **sw_ring; /**< RX software ring. */
struct vq_desc_extra vq_descx[];
};
--
2.38.1
* [PATCH v1 11/21] net/virtio: extract virtqueue init from virtio queue init
2022-11-30 15:56 [PATCH v1 00/21] Add control queue & MQ support to Virtio-user vDPA Maxime Coquelin
` (9 preceding siblings ...)
2022-11-30 15:56 ` [PATCH v1 10/21] net/virtio: alloc Rx SW ring only if vectorized path Maxime Coquelin
@ 2022-11-30 15:56 ` Maxime Coquelin
2023-01-30 7:53 ` Xia, Chenbo
2022-11-30 15:56 ` [PATCH v1 12/21] net/virtio-user: fix device starting failure handling Maxime Coquelin
` (10 subsequent siblings)
21 siblings, 1 reply; 48+ messages in thread
From: Maxime Coquelin @ 2022-11-30 15:56 UTC (permalink / raw)
To: dev, chenbo.xia, david.marchand, eperezma; +Cc: Maxime Coquelin
This patch extracts the virtqueue initialization out of
the Virtio ethdev queue initialization, as preliminary
work to provide a way for Virtio-user to allocate its
shadow control virtqueue.
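The extracted virtqueue_alloc() sizes a single allocation for the struct plus its per-entry descriptor metadata. That sizing idiom, a flexible array member, can be sketched standalone (hypothetical names, not the DPDK definitions):

```c
#include <assert.h>
#include <stdlib.h>

/* Hypothetical per-descriptor metadata, stand-in for vq_desc_extra. */
struct desc_extra { void *cookie; unsigned int ndescs; };

struct vq {
	unsigned int nentries;
	struct desc_extra descx[]; /* flexible array member */
};

/* One zeroed allocation covering the struct and num trailing
 * desc_extra entries, as the extracted allocator does. */
static struct vq *vq_alloc(unsigned int num)
{
	struct vq *vq;

	vq = calloc(1, sizeof(*vq) + num * sizeof(struct desc_extra));
	if (vq != NULL)
		vq->nentries = num;
	return vq;
}
```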
Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
---
drivers/net/virtio/virtio_ethdev.c | 261 ++--------------------------
drivers/net/virtio/virtqueue.c | 266 +++++++++++++++++++++++++++++
drivers/net/virtio/virtqueue.h | 5 +
3 files changed, 282 insertions(+), 250 deletions(-)
diff --git a/drivers/net/virtio/virtio_ethdev.c b/drivers/net/virtio/virtio_ethdev.c
index 46dd5606f6..8f657d2d90 100644
--- a/drivers/net/virtio/virtio_ethdev.c
+++ b/drivers/net/virtio/virtio_ethdev.c
@@ -221,173 +221,18 @@ virtio_get_nr_vq(struct virtio_hw *hw)
return nr_vq;
}
-static void
-virtio_init_vring(struct virtqueue *vq)
-{
- int size = vq->vq_nentries;
- uint8_t *ring_mem = vq->vq_ring_virt_mem;
-
- PMD_INIT_FUNC_TRACE();
-
- memset(ring_mem, 0, vq->vq_ring_size);
-
- vq->vq_used_cons_idx = 0;
- vq->vq_desc_head_idx = 0;
- vq->vq_avail_idx = 0;
- vq->vq_desc_tail_idx = (uint16_t)(vq->vq_nentries - 1);
- vq->vq_free_cnt = vq->vq_nentries;
- memset(vq->vq_descx, 0, sizeof(struct vq_desc_extra) * vq->vq_nentries);
- if (virtio_with_packed_queue(vq->hw)) {
- vring_init_packed(&vq->vq_packed.ring, ring_mem,
- VIRTIO_VRING_ALIGN, size);
- vring_desc_init_packed(vq, size);
- } else {
- struct vring *vr = &vq->vq_split.ring;
-
- vring_init_split(vr, ring_mem, VIRTIO_VRING_ALIGN, size);
- vring_desc_init_split(vr->desc, size);
- }
- /*
- * Disable device(host) interrupting guest
- */
- virtqueue_disable_intr(vq);
-}
-
static void
virtio_control_queue_notify(struct virtqueue *vq, __rte_unused void *cookie)
{
virtqueue_notify(vq);
}
-static int
-virtio_alloc_queue_headers(struct virtqueue *vq, int numa_node, const char *name)
-{
- char hdr_name[VIRTQUEUE_MAX_NAME_SZ];
- const struct rte_memzone **hdr_mz;
- rte_iova_t *hdr_mem;
- ssize_t size;
- int queue_type;
-
- queue_type = virtio_get_queue_type(vq->hw, vq->vq_queue_index);
- switch (queue_type) {
- case VTNET_TQ:
- /*
- * For each xmit packet, allocate a virtio_net_hdr
- * and indirect ring elements
- */
- size = vq->vq_nentries * sizeof(struct virtio_tx_region);
- hdr_mz = &vq->txq.hdr_mz;
- hdr_mem = &vq->txq.hdr_mem;
- break;
- case VTNET_CQ:
- /* Allocate a page for control vq command, data and status */
- size = rte_mem_page_size();
- hdr_mz = &vq->cq.hdr_mz;
- hdr_mem = &vq->cq.hdr_mem;
- break;
- case VTNET_RQ:
- /* fallthrough */
- default:
- return 0;
- }
-
- snprintf(hdr_name, sizeof(hdr_name), "%s_hdr", name);
- *hdr_mz = rte_memzone_reserve_aligned(hdr_name, size, numa_node,
- RTE_MEMZONE_IOVA_CONTIG, RTE_CACHE_LINE_SIZE);
- if (*hdr_mz == NULL) {
- if (rte_errno == EEXIST)
- *hdr_mz = rte_memzone_lookup(hdr_name);
- if (*hdr_mz == NULL)
- return -ENOMEM;
- }
-
- memset((*hdr_mz)->addr, 0, size);
-
- if (vq->hw->use_va)
- *hdr_mem = (uintptr_t)(*hdr_mz)->addr;
- else
- *hdr_mem = (uintptr_t)(*hdr_mz)->iova;
-
- return 0;
-}
-
-static void
-virtio_free_queue_headers(struct virtqueue *vq)
-{
- const struct rte_memzone **hdr_mz;
- rte_iova_t *hdr_mem;
- int queue_type;
-
- queue_type = virtio_get_queue_type(vq->hw, vq->vq_queue_index);
- switch (queue_type) {
- case VTNET_TQ:
- hdr_mz = &vq->txq.hdr_mz;
- hdr_mem = &vq->txq.hdr_mem;
- break;
- case VTNET_CQ:
- hdr_mz = &vq->cq.hdr_mz;
- hdr_mem = &vq->cq.hdr_mem;
- break;
- case VTNET_RQ:
- /* fallthrough */
- default:
- return;
- }
-
- rte_memzone_free(*hdr_mz);
- *hdr_mz = NULL;
- *hdr_mem = 0;
-}
-
-static int
-virtio_rxq_sw_ring_alloc(struct virtqueue *vq, int numa_node)
-{
- void *sw_ring;
- struct rte_mbuf *mbuf;
- size_t size;
-
- /* SW ring is only used with vectorized datapath */
- if (!vq->hw->use_vec_rx)
- return 0;
-
- size = (RTE_PMD_VIRTIO_RX_MAX_BURST + vq->vq_nentries) * sizeof(vq->rxq.sw_ring[0]);
-
- sw_ring = rte_zmalloc_socket("sw_ring", size, RTE_CACHE_LINE_SIZE, numa_node);
- if (!sw_ring) {
- PMD_INIT_LOG(ERR, "can not allocate RX soft ring");
- return -ENOMEM;
- }
-
- mbuf = rte_zmalloc_socket("sw_ring", sizeof(*mbuf), RTE_CACHE_LINE_SIZE, numa_node);
- if (!mbuf) {
- PMD_INIT_LOG(ERR, "can not allocate fake mbuf");
- rte_free(sw_ring);
- return -ENOMEM;
- }
-
- vq->rxq.sw_ring = sw_ring;
- vq->rxq.fake_mbuf = mbuf;
-
- return 0;
-}
-
-static void
-virtio_rxq_sw_ring_free(struct virtqueue *vq)
-{
- rte_free(vq->rxq.fake_mbuf);
- vq->rxq.fake_mbuf = NULL;
- rte_free(vq->rxq.sw_ring);
- vq->rxq.sw_ring = NULL;
-}
-
static int
virtio_init_queue(struct rte_eth_dev *dev, uint16_t queue_idx)
{
char vq_name[VIRTQUEUE_MAX_NAME_SZ];
- const struct rte_memzone *mz = NULL;
- unsigned int vq_size, size;
+ unsigned int vq_size;
struct virtio_hw *hw = dev->data->dev_private;
- struct virtnet_ctl *cvq = NULL;
struct virtqueue *vq;
int queue_type = virtio_get_queue_type(hw, queue_idx);
int ret;
@@ -414,87 +259,19 @@ virtio_init_queue(struct rte_eth_dev *dev, uint16_t queue_idx)
snprintf(vq_name, sizeof(vq_name), "port%d_vq%d", dev->data->port_id, queue_idx);
- size = RTE_ALIGN_CEIL(sizeof(*vq) +
- vq_size * sizeof(struct vq_desc_extra),
- RTE_CACHE_LINE_SIZE);
-
-
- vq = rte_zmalloc_socket(vq_name, size, RTE_CACHE_LINE_SIZE,
- numa_node);
- if (vq == NULL) {
- PMD_INIT_LOG(ERR, "can not allocate vq");
+ vq = virtqueue_alloc(hw, queue_idx, vq_size, queue_type, numa_node, vq_name);
+ if (!vq) {
+ PMD_INIT_LOG(ERR, "virtqueue init failed");
return -ENOMEM;
}
- hw->vqs[queue_idx] = vq;
- vq->hw = hw;
- vq->vq_queue_index = queue_idx;
- vq->vq_nentries = vq_size;
- if (virtio_with_packed_queue(hw)) {
- vq->vq_packed.used_wrap_counter = 1;
- vq->vq_packed.cached_flags = VRING_PACKED_DESC_F_AVAIL;
- vq->vq_packed.event_flags_shadow = 0;
- if (queue_type == VTNET_RQ)
- vq->vq_packed.cached_flags |= VRING_DESC_F_WRITE;
- }
-
- /*
- * Reserve a memzone for vring elements
- */
- size = vring_size(hw, vq_size, VIRTIO_VRING_ALIGN);
- vq->vq_ring_size = RTE_ALIGN_CEIL(size, VIRTIO_VRING_ALIGN);
- PMD_INIT_LOG(DEBUG, "vring_size: %d, rounded_vring_size: %d",
- size, vq->vq_ring_size);
-
- mz = rte_memzone_reserve_aligned(vq_name, vq->vq_ring_size,
- numa_node, RTE_MEMZONE_IOVA_CONTIG,
- VIRTIO_VRING_ALIGN);
- if (mz == NULL) {
- if (rte_errno == EEXIST)
- mz = rte_memzone_lookup(vq_name);
- if (mz == NULL) {
- ret = -ENOMEM;
- goto free_vq;
- }
- }
-
- memset(mz->addr, 0, mz->len);
-
- vq->mz = mz;
- if (hw->use_va)
- vq->vq_ring_mem = (uintptr_t)mz->addr;
- else
- vq->vq_ring_mem = mz->iova;
-
- vq->vq_ring_virt_mem = mz->addr;
- PMD_INIT_LOG(DEBUG, "vq->vq_ring_mem: 0x%" PRIx64, vq->vq_ring_mem);
- PMD_INIT_LOG(DEBUG, "vq->vq_ring_virt_mem: %p", vq->vq_ring_virt_mem);
-
- virtio_init_vring(vq);
+ hw->vqs[queue_idx] = vq;
- ret = virtio_alloc_queue_headers(vq, numa_node, vq_name);
- if (ret) {
- PMD_INIT_LOG(ERR, "Failed to alloc queue headers");
- goto free_mz;
- }
-
- if (queue_type == VTNET_RQ) {
- ret = virtio_rxq_sw_ring_alloc(vq, numa_node);
- if (ret)
- goto free_hdr_mz;
- } else if (queue_type == VTNET_TQ) {
- virtqueue_txq_indirect_headers_init(vq);
- } else if (queue_type == VTNET_CQ) {
- cvq = &vq->cq;
- hw->cvq = cvq;
+ if (queue_type == VTNET_CQ) {
+ hw->cvq = &vq->cq;
vq->cq.notify_queue = &virtio_control_queue_notify;
}
- if (hw->use_va)
- vq->mbuf_addr_offset = offsetof(struct rte_mbuf, buf_addr);
- else
- vq->mbuf_addr_offset = offsetof(struct rte_mbuf, buf_iova);
-
if (VIRTIO_OPS(hw)->setup_queue(hw, vq) < 0) {
PMD_INIT_LOG(ERR, "setup_queue failed");
ret = -EINVAL;
@@ -504,15 +281,9 @@ virtio_init_queue(struct rte_eth_dev *dev, uint16_t queue_idx)
return 0;
clean_vq:
- hw->cvq = NULL;
- if (queue_type == VTNET_RQ)
- virtio_rxq_sw_ring_free(vq);
-free_hdr_mz:
- virtio_free_queue_headers(vq);
-free_mz:
- rte_memzone_free(mz);
-free_vq:
- rte_free(vq);
+ if (queue_type == VTNET_CQ)
+ hw->cvq = NULL;
+ virtqueue_free(vq);
hw->vqs[queue_idx] = NULL;
return ret;
@@ -523,7 +294,6 @@ virtio_free_queues(struct virtio_hw *hw)
{
uint16_t nr_vq = virtio_get_nr_vq(hw);
struct virtqueue *vq;
- int queue_type;
uint16_t i;
if (hw->vqs == NULL)
@@ -533,16 +303,7 @@ virtio_free_queues(struct virtio_hw *hw)
vq = hw->vqs[i];
if (!vq)
continue;
-
- queue_type = virtio_get_queue_type(hw, i);
- if (queue_type == VTNET_RQ) {
- rte_free(vq->rxq.fake_mbuf);
- rte_free(vq->rxq.sw_ring);
- }
-
- virtio_free_queue_headers(vq);
- rte_memzone_free(vq->mz);
- rte_free(vq);
+ virtqueue_free(vq);
hw->vqs[i] = NULL;
}
diff --git a/drivers/net/virtio/virtqueue.c b/drivers/net/virtio/virtqueue.c
index 7a84796513..1d836f2530 100644
--- a/drivers/net/virtio/virtqueue.c
+++ b/drivers/net/virtio/virtqueue.c
@@ -2,8 +2,12 @@
* Copyright(c) 2010-2015 Intel Corporation
*/
#include <stdint.h>
+#include <unistd.h>
+#include <rte_eal_paging.h>
+#include <rte_malloc.h>
#include <rte_mbuf.h>
+#include <rte_memzone.h>
#include "virtqueue.h"
#include "virtio_logs.h"
@@ -259,3 +263,265 @@ virtqueue_txvq_reset_packed(struct virtqueue *vq)
return 0;
}
+
+
+static void
+virtio_init_vring(struct virtqueue *vq)
+{
+ int size = vq->vq_nentries;
+ uint8_t *ring_mem = vq->vq_ring_virt_mem;
+
+ PMD_INIT_FUNC_TRACE();
+
+ memset(ring_mem, 0, vq->vq_ring_size);
+
+ vq->vq_used_cons_idx = 0;
+ vq->vq_desc_head_idx = 0;
+ vq->vq_avail_idx = 0;
+ vq->vq_desc_tail_idx = (uint16_t)(vq->vq_nentries - 1);
+ vq->vq_free_cnt = vq->vq_nentries;
+ memset(vq->vq_descx, 0, sizeof(struct vq_desc_extra) * vq->vq_nentries);
+ if (virtio_with_packed_queue(vq->hw)) {
+ vring_init_packed(&vq->vq_packed.ring, ring_mem,
+ VIRTIO_VRING_ALIGN, size);
+ vring_desc_init_packed(vq, size);
+ } else {
+ struct vring *vr = &vq->vq_split.ring;
+
+ vring_init_split(vr, ring_mem, VIRTIO_VRING_ALIGN, size);
+ vring_desc_init_split(vr->desc, size);
+ }
+ /*
+ * Disable device(host) interrupting guest
+ */
+ virtqueue_disable_intr(vq);
+}
+
+static int
+virtio_alloc_queue_headers(struct virtqueue *vq, int numa_node, const char *name)
+{
+ char hdr_name[VIRTQUEUE_MAX_NAME_SZ];
+ const struct rte_memzone **hdr_mz;
+ rte_iova_t *hdr_mem;
+ ssize_t size;
+ int queue_type;
+
+ queue_type = virtio_get_queue_type(vq->hw, vq->vq_queue_index);
+ switch (queue_type) {
+ case VTNET_TQ:
+ /*
+ * For each xmit packet, allocate a virtio_net_hdr
+ * and indirect ring elements
+ */
+ size = vq->vq_nentries * sizeof(struct virtio_tx_region);
+ hdr_mz = &vq->txq.hdr_mz;
+ hdr_mem = &vq->txq.hdr_mem;
+ break;
+ case VTNET_CQ:
+ /* Allocate a page for control vq command, data and status */
+ size = rte_mem_page_size();
+ hdr_mz = &vq->cq.hdr_mz;
+ hdr_mem = &vq->cq.hdr_mem;
+ break;
+ case VTNET_RQ:
+ /* fallthrough */
+ default:
+ return 0;
+ }
+
+ snprintf(hdr_name, sizeof(hdr_name), "%s_hdr", name);
+ *hdr_mz = rte_memzone_reserve_aligned(hdr_name, size, numa_node,
+ RTE_MEMZONE_IOVA_CONTIG, RTE_CACHE_LINE_SIZE);
+ if (*hdr_mz == NULL) {
+ if (rte_errno == EEXIST)
+ *hdr_mz = rte_memzone_lookup(hdr_name);
+ if (*hdr_mz == NULL)
+ return -ENOMEM;
+ }
+
+ memset((*hdr_mz)->addr, 0, size);
+
+ if (vq->hw->use_va)
+ *hdr_mem = (uintptr_t)(*hdr_mz)->addr;
+ else
+ *hdr_mem = (uintptr_t)(*hdr_mz)->iova;
+
+ return 0;
+}
+
+static void
+virtio_free_queue_headers(struct virtqueue *vq)
+{
+ const struct rte_memzone **hdr_mz;
+ rte_iova_t *hdr_mem;
+ int queue_type;
+
+ queue_type = virtio_get_queue_type(vq->hw, vq->vq_queue_index);
+ switch (queue_type) {
+ case VTNET_TQ:
+ hdr_mz = &vq->txq.hdr_mz;
+ hdr_mem = &vq->txq.hdr_mem;
+ break;
+ case VTNET_CQ:
+ hdr_mz = &vq->cq.hdr_mz;
+ hdr_mem = &vq->cq.hdr_mem;
+ break;
+ case VTNET_RQ:
+ /* fallthrough */
+ default:
+ return;
+ }
+
+ rte_memzone_free(*hdr_mz);
+ *hdr_mz = NULL;
+ *hdr_mem = 0;
+}
+
+static int
+virtio_rxq_sw_ring_alloc(struct virtqueue *vq, int numa_node)
+{
+ void *sw_ring;
+ struct rte_mbuf *mbuf;
+ size_t size;
+
+ /* SW ring is only used with vectorized datapath */
+ if (!vq->hw->use_vec_rx)
+ return 0;
+
+ size = (RTE_PMD_VIRTIO_RX_MAX_BURST + vq->vq_nentries) * sizeof(vq->rxq.sw_ring[0]);
+
+ sw_ring = rte_zmalloc_socket("sw_ring", size, RTE_CACHE_LINE_SIZE, numa_node);
+ if (!sw_ring) {
+ PMD_INIT_LOG(ERR, "can not allocate RX soft ring");
+ return -ENOMEM;
+ }
+
+ mbuf = rte_zmalloc_socket("sw_ring", sizeof(*mbuf), RTE_CACHE_LINE_SIZE, numa_node);
+ if (!mbuf) {
+ PMD_INIT_LOG(ERR, "can not allocate fake mbuf");
+ rte_free(sw_ring);
+ return -ENOMEM;
+ }
+
+ vq->rxq.sw_ring = sw_ring;
+ vq->rxq.fake_mbuf = mbuf;
+
+ return 0;
+}
+
+static void
+virtio_rxq_sw_ring_free(struct virtqueue *vq)
+{
+ rte_free(vq->rxq.fake_mbuf);
+ vq->rxq.fake_mbuf = NULL;
+ rte_free(vq->rxq.sw_ring);
+ vq->rxq.sw_ring = NULL;
+}
+
+struct virtqueue *
+virtqueue_alloc(struct virtio_hw *hw, uint16_t index, uint16_t num, int type,
+ int node, const char *name)
+{
+ struct virtqueue *vq;
+ const struct rte_memzone *mz;
+ unsigned int size;
+
+ size = sizeof(*vq) + num * sizeof(struct vq_desc_extra);
+ size = RTE_ALIGN_CEIL(size, RTE_CACHE_LINE_SIZE);
+
+ vq = rte_zmalloc_socket(name, size, RTE_CACHE_LINE_SIZE, node);
+ if (vq == NULL) {
+ PMD_INIT_LOG(ERR, "can not allocate vq");
+ return NULL;
+ }
+
+ vq->hw = hw;
+ vq->vq_queue_index = index;
+ vq->vq_nentries = num;
+ if (virtio_with_packed_queue(hw)) {
+ vq->vq_packed.used_wrap_counter = 1;
+ vq->vq_packed.cached_flags = VRING_PACKED_DESC_F_AVAIL;
+ vq->vq_packed.event_flags_shadow = 0;
+ if (type == VTNET_RQ)
+ vq->vq_packed.cached_flags |= VRING_DESC_F_WRITE;
+ }
+
+ /*
+ * Reserve a memzone for vring elements
+ */
+ size = vring_size(hw, num, VIRTIO_VRING_ALIGN);
+ vq->vq_ring_size = RTE_ALIGN_CEIL(size, VIRTIO_VRING_ALIGN);
+ PMD_INIT_LOG(DEBUG, "vring_size: %d, rounded_vring_size: %d", size, vq->vq_ring_size);
+
+ mz = rte_memzone_reserve_aligned(name, vq->vq_ring_size, node,
+ RTE_MEMZONE_IOVA_CONTIG, VIRTIO_VRING_ALIGN);
+ if (mz == NULL) {
+ if (rte_errno == EEXIST)
+ mz = rte_memzone_lookup(name);
+ if (mz == NULL)
+ goto free_vq;
+ }
+
+ memset(mz->addr, 0, mz->len);
+ vq->mz = mz;
+ vq->vq_ring_virt_mem = mz->addr;
+
+ if (hw->use_va) {
+ vq->vq_ring_mem = (uintptr_t)mz->addr;
+ vq->mbuf_addr_offset = offsetof(struct rte_mbuf, buf_addr);
+ } else {
+ vq->vq_ring_mem = mz->iova;
+ vq->mbuf_addr_offset = offsetof(struct rte_mbuf, buf_iova);
+ }
+
+ PMD_INIT_LOG(DEBUG, "vq->vq_ring_mem: 0x%" PRIx64, vq->vq_ring_mem);
+ PMD_INIT_LOG(DEBUG, "vq->vq_ring_virt_mem: %p", vq->vq_ring_virt_mem);
+
+ virtio_init_vring(vq);
+
+ if (virtio_alloc_queue_headers(vq, node, name)) {
+ PMD_INIT_LOG(ERR, "Failed to alloc queue headers");
+ goto free_mz;
+ }
+
+ switch (type) {
+ case VTNET_RQ:
+ if (virtio_rxq_sw_ring_alloc(vq, node))
+ goto free_hdr_mz;
+ break;
+ case VTNET_TQ:
+ virtqueue_txq_indirect_headers_init(vq);
+ break;
+ }
+
+ return vq;
+
+free_hdr_mz:
+ virtio_free_queue_headers(vq);
+free_mz:
+ rte_memzone_free(mz);
+free_vq:
+ rte_free(vq);
+
+ return NULL;
+}
+
+void
+virtqueue_free(struct virtqueue *vq)
+{
+ int type;
+
+ type = virtio_get_queue_type(vq->hw, vq->vq_queue_index);
+ switch (type) {
+ case VTNET_RQ:
+ virtio_rxq_sw_ring_free(vq);
+ break;
+ case VTNET_TQ:
+ case VTNET_CQ:
+ virtio_free_queue_headers(vq);
+ break;
+ }
+
+ rte_memzone_free(vq->mz);
+ rte_free(vq);
+}
diff --git a/drivers/net/virtio/virtqueue.h b/drivers/net/virtio/virtqueue.h
index d7f8ee79bb..9d4aba11a3 100644
--- a/drivers/net/virtio/virtqueue.h
+++ b/drivers/net/virtio/virtqueue.h
@@ -385,6 +385,11 @@ int virtqueue_txvq_reset_packed(struct virtqueue *vq);
void virtqueue_txq_indirect_headers_init(struct virtqueue *vq);
+struct virtqueue *virtqueue_alloc(struct virtio_hw *hw, uint16_t index,
+ uint16_t num, int type, int node, const char *name);
+
+void virtqueue_free(struct virtqueue *vq);
+
static inline int
virtqueue_full(const struct virtqueue *vq)
{
--
2.38.1
^ permalink raw reply [flat|nested] 48+ messages in thread
* [PATCH v1 12/21] net/virtio-user: fix device starting failure handling
2022-11-30 15:56 [PATCH v1 00/21] Add control queue & MQ support to Virtio-user vDPA Maxime Coquelin
` (10 preceding siblings ...)
2022-11-30 15:56 ` [PATCH v1 11/21] net/virtio: extract virtqueue init from virtio queue init Maxime Coquelin
@ 2022-11-30 15:56 ` Maxime Coquelin
2023-01-31 5:20 ` Xia, Chenbo
2022-11-30 15:56 ` [PATCH v1 13/21] net/virtio-user: simplify queues setup Maxime Coquelin
` (9 subsequent siblings)
21 siblings, 1 reply; 48+ messages in thread
From: Maxime Coquelin @ 2022-11-30 15:56 UTC (permalink / raw)
To: dev, chenbo.xia, david.marchand, eperezma; +Cc: Maxime Coquelin, stable
If the device fails to start, read the status from the
device and return early.
Fixes: 57912824615f ("net/virtio-user: support vhost status setting")
Cc: stable@dpdk.org
Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
---
drivers/net/virtio/virtio_user_ethdev.c | 11 ++++++++---
1 file changed, 8 insertions(+), 3 deletions(-)
diff --git a/drivers/net/virtio/virtio_user_ethdev.c b/drivers/net/virtio/virtio_user_ethdev.c
index d32abec327..78b1ed9ace 100644
--- a/drivers/net/virtio/virtio_user_ethdev.c
+++ b/drivers/net/virtio/virtio_user_ethdev.c
@@ -90,10 +90,15 @@ virtio_user_set_status(struct virtio_hw *hw, uint8_t status)
if (status & VIRTIO_CONFIG_STATUS_FEATURES_OK &&
~old_status & VIRTIO_CONFIG_STATUS_FEATURES_OK)
virtio_user_dev_set_features(dev);
- if (status & VIRTIO_CONFIG_STATUS_DRIVER_OK)
- virtio_user_start_device(dev);
- else if (status == VIRTIO_CONFIG_STATUS_RESET)
+
+ if (status & VIRTIO_CONFIG_STATUS_DRIVER_OK) {
+ if (virtio_user_start_device(dev)) {
+ virtio_user_dev_update_status(dev);
+ return;
+ }
+ } else if (status == VIRTIO_CONFIG_STATUS_RESET) {
virtio_user_reset(hw);
+ }
virtio_user_dev_set_status(dev, status);
}
--
2.38.1
^ permalink raw reply [flat|nested] 48+ messages in thread
* [PATCH v1 13/21] net/virtio-user: simplify queues setup
2022-11-30 15:56 [PATCH v1 00/21] Add control queue & MQ support to Virtio-user vDPA Maxime Coquelin
` (11 preceding siblings ...)
2022-11-30 15:56 ` [PATCH v1 12/21] net/virtio-user: fix device starting failure handling Maxime Coquelin
@ 2022-11-30 15:56 ` Maxime Coquelin
2023-01-31 5:21 ` Xia, Chenbo
2022-11-30 15:56 ` [PATCH v1 14/21] net/virtio-user: use proper type for number of queue pairs Maxime Coquelin
` (8 subsequent siblings)
21 siblings, 1 reply; 48+ messages in thread
From: Maxime Coquelin @ 2022-11-30 15:56 UTC (permalink / raw)
To: dev, chenbo.xia, david.marchand, eperezma; +Cc: Maxime Coquelin
The only reason two loops were needed to iterate over
queues at setup time was to be able to print whether a
Rx or Tx queue setup failed.
This patch changes the queue iteration to a single loop.
Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
---
drivers/net/virtio/virtio_user/virtio_user_dev.c | 16 ++++------------
1 file changed, 4 insertions(+), 12 deletions(-)
diff --git a/drivers/net/virtio/virtio_user/virtio_user_dev.c b/drivers/net/virtio/virtio_user/virtio_user_dev.c
index 19599aa3f6..873c6aa036 100644
--- a/drivers/net/virtio/virtio_user/virtio_user_dev.c
+++ b/drivers/net/virtio/virtio_user/virtio_user_dev.c
@@ -118,19 +118,11 @@ static int
virtio_user_queue_setup(struct virtio_user_dev *dev,
int (*fn)(struct virtio_user_dev *, uint32_t))
{
- uint32_t i, queue_sel;
+ uint32_t i;
- for (i = 0; i < dev->max_queue_pairs; ++i) {
- queue_sel = 2 * i + VTNET_SQ_RQ_QUEUE_IDX;
- if (fn(dev, queue_sel) < 0) {
- PMD_DRV_LOG(ERR, "(%s) setup rx vq %u failed", dev->path, i);
- return -1;
- }
- }
- for (i = 0; i < dev->max_queue_pairs; ++i) {
- queue_sel = 2 * i + VTNET_SQ_TQ_QUEUE_IDX;
- if (fn(dev, queue_sel) < 0) {
- PMD_DRV_LOG(INFO, "(%s) setup tx vq %u failed", dev->path, i);
+ for (i = 0; i < dev->max_queue_pairs * 2; ++i) {
+ if (fn(dev, i) < 0) {
+ PMD_DRV_LOG(ERR, "(%s) setup VQ %u failed", dev->path, i);
return -1;
}
}
--
2.38.1
^ permalink raw reply [flat|nested] 48+ messages in thread
* [PATCH v1 14/21] net/virtio-user: use proper type for number of queue pairs
2022-11-30 15:56 [PATCH v1 00/21] Add control queue & MQ support to Virtio-user vDPA Maxime Coquelin
` (12 preceding siblings ...)
2022-11-30 15:56 ` [PATCH v1 13/21] net/virtio-user: simplify queues setup Maxime Coquelin
@ 2022-11-30 15:56 ` Maxime Coquelin
2023-01-31 5:21 ` Xia, Chenbo
2022-11-30 15:56 ` [PATCH v1 15/21] net/virtio-user: get max number of queue pairs from device Maxime Coquelin
` (7 subsequent siblings)
21 siblings, 1 reply; 48+ messages in thread
From: Maxime Coquelin @ 2022-11-30 15:56 UTC (permalink / raw)
To: dev, chenbo.xia, david.marchand, eperezma; +Cc: Maxime Coquelin
The number of queue pairs is specified as a 16-bit
unsigned integer in the Virtio specification.
Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
---
drivers/net/virtio/virtio_user/virtio_user_dev.c | 2 +-
drivers/net/virtio/virtio_user/virtio_user_dev.h | 6 +++---
drivers/net/virtio/virtio_user_ethdev.c | 2 +-
3 files changed, 5 insertions(+), 5 deletions(-)
diff --git a/drivers/net/virtio/virtio_user/virtio_user_dev.c b/drivers/net/virtio/virtio_user/virtio_user_dev.c
index 873c6aa036..809c9ef442 100644
--- a/drivers/net/virtio/virtio_user/virtio_user_dev.c
+++ b/drivers/net/virtio/virtio_user/virtio_user_dev.c
@@ -553,7 +553,7 @@ virtio_user_dev_setup(struct virtio_user_dev *dev)
1ULL << VIRTIO_F_RING_PACKED)
int
-virtio_user_dev_init(struct virtio_user_dev *dev, char *path, int queues,
+virtio_user_dev_init(struct virtio_user_dev *dev, char *path, uint16_t queues,
int cq, int queue_size, const char *mac, char **ifname,
int server, int mrg_rxbuf, int in_order, int packed_vq,
enum virtio_user_backend_type backend_type)
diff --git a/drivers/net/virtio/virtio_user/virtio_user_dev.h b/drivers/net/virtio/virtio_user/virtio_user_dev.h
index 819f6463ba..3c5453eac0 100644
--- a/drivers/net/virtio/virtio_user/virtio_user_dev.h
+++ b/drivers/net/virtio/virtio_user/virtio_user_dev.h
@@ -32,8 +32,8 @@ struct virtio_user_dev {
int callfds[VIRTIO_MAX_VIRTQUEUES];
int kickfds[VIRTIO_MAX_VIRTQUEUES];
int mac_specified;
- uint32_t max_queue_pairs;
- uint32_t queue_pairs;
+ uint16_t max_queue_pairs;
+ uint16_t queue_pairs;
uint32_t queue_size;
uint64_t features; /* the negotiated features with driver,
* and will be sync with device
@@ -64,7 +64,7 @@ struct virtio_user_dev {
int virtio_user_dev_set_features(struct virtio_user_dev *dev);
int virtio_user_start_device(struct virtio_user_dev *dev);
int virtio_user_stop_device(struct virtio_user_dev *dev);
-int virtio_user_dev_init(struct virtio_user_dev *dev, char *path, int queues,
+int virtio_user_dev_init(struct virtio_user_dev *dev, char *path, uint16_t queues,
int cq, int queue_size, const char *mac, char **ifname,
int server, int mrg_rxbuf, int in_order,
int packed_vq,
diff --git a/drivers/net/virtio/virtio_user_ethdev.c b/drivers/net/virtio/virtio_user_ethdev.c
index 78b1ed9ace..6ad5896378 100644
--- a/drivers/net/virtio/virtio_user_ethdev.c
+++ b/drivers/net/virtio/virtio_user_ethdev.c
@@ -655,7 +655,7 @@ virtio_user_pmd_probe(struct rte_vdev_device *vdev)
dev = eth_dev->data->dev_private;
hw = &dev->hw;
- if (virtio_user_dev_init(dev, path, queues, cq,
+ if (virtio_user_dev_init(dev, path, (uint16_t)queues, cq,
queue_size, mac_addr, &ifname, server_mode,
mrg_rxbuf, in_order, packed_vq, backend_type) < 0) {
PMD_INIT_LOG(ERR, "virtio_user_dev_init fails");
--
2.38.1
^ permalink raw reply [flat|nested] 48+ messages in thread
* [PATCH v1 15/21] net/virtio-user: get max number of queue pairs from device
2022-11-30 15:56 [PATCH v1 00/21] Add control queue & MQ support to Virtio-user vDPA Maxime Coquelin
` (13 preceding siblings ...)
2022-11-30 15:56 ` [PATCH v1 14/21] net/virtio-user: use proper type for number of queue pairs Maxime Coquelin
@ 2022-11-30 15:56 ` Maxime Coquelin
2023-01-31 5:21 ` Xia, Chenbo
2022-11-30 15:56 ` [PATCH v1 16/21] net/virtio-user: allocate shadow control queue Maxime Coquelin
` (6 subsequent siblings)
21 siblings, 1 reply; 48+ messages in thread
From: Maxime Coquelin @ 2022-11-30 15:56 UTC (permalink / raw)
To: dev, chenbo.xia, david.marchand, eperezma; +Cc: Maxime Coquelin
When supported by the backend (only vDPA for now), this
patch gets the maximum number of queue pairs supported by
the device by querying its config space.
This is required for adding backend control queue support,
as its index equals the maximum number of queues supported
by the device, as described by the Virtio specification.
Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
---
.../net/virtio/virtio_user/virtio_user_dev.c | 93 ++++++++++++++-----
drivers/net/virtio/virtio_user_ethdev.c | 7 --
2 files changed, 71 insertions(+), 29 deletions(-)
diff --git a/drivers/net/virtio/virtio_user/virtio_user_dev.c b/drivers/net/virtio/virtio_user/virtio_user_dev.c
index 809c9ef442..a3584e7735 100644
--- a/drivers/net/virtio/virtio_user/virtio_user_dev.c
+++ b/drivers/net/virtio/virtio_user/virtio_user_dev.c
@@ -253,6 +253,50 @@ int virtio_user_stop_device(struct virtio_user_dev *dev)
return -1;
}
+static int
+virtio_user_dev_init_max_queue_pairs(struct virtio_user_dev *dev, uint32_t user_max_qp)
+{
+ int ret;
+
+ if (!(dev->device_features & (1ULL << VIRTIO_NET_F_MQ))) {
+ dev->max_queue_pairs = 1;
+ return 0;
+ }
+
+ if (!dev->ops->get_config) {
+ dev->max_queue_pairs = user_max_qp;
+ return 0;
+ }
+
+ ret = dev->ops->get_config(dev, (uint8_t *)&dev->max_queue_pairs,
+ offsetof(struct virtio_net_config, max_virtqueue_pairs),
+ sizeof(uint16_t));
+ if (ret) {
+ /*
+ * We need to know the max queue pair from the device so that
+ * the control queue gets the right index.
+ */
+ dev->max_queue_pairs = 1;
+ PMD_DRV_LOG(ERR, "(%s) Failed to get max queue pairs from device", dev->path);
+
+ return ret;
+ }
+
+ if (dev->max_queue_pairs > VIRTIO_MAX_VIRTQUEUE_PAIRS) {
+ /*
+ * If the device supports control queue, the control queue
+ * index is max_virtqueue_pairs * 2. Disable MQ if it happens.
+ */
+ PMD_DRV_LOG(ERR, "(%s) Device advertises too many queues (%u, max supported %u)",
+ dev->path, dev->max_queue_pairs, VIRTIO_MAX_VIRTQUEUE_PAIRS);
+ dev->max_queue_pairs = 1;
+
+ return -1;
+ }
+
+ return 0;
+}
+
int
virtio_user_dev_set_mac(struct virtio_user_dev *dev)
{
@@ -511,24 +555,7 @@ virtio_user_dev_setup(struct virtio_user_dev *dev)
return -1;
}
- if (virtio_user_dev_init_notify(dev) < 0) {
- PMD_INIT_LOG(ERR, "(%s) Failed to init notifiers", dev->path);
- goto destroy;
- }
-
- if (virtio_user_fill_intr_handle(dev) < 0) {
- PMD_INIT_LOG(ERR, "(%s) Failed to init interrupt handler", dev->path);
- goto uninit;
- }
-
return 0;
-
-uninit:
- virtio_user_dev_uninit_notify(dev);
-destroy:
- dev->ops->destroy(dev);
-
- return -1;
}
/* Use below macro to filter features from vhost backend */
@@ -570,7 +597,6 @@ virtio_user_dev_init(struct virtio_user_dev *dev, char *path, uint16_t queues,
}
dev->started = 0;
- dev->max_queue_pairs = queues;
dev->queue_pairs = 1; /* mq disabled by default */
dev->queue_size = queue_size;
dev->is_server = server;
@@ -591,23 +617,39 @@ virtio_user_dev_init(struct virtio_user_dev *dev, char *path, uint16_t queues,
if (dev->ops->set_owner(dev) < 0) {
PMD_INIT_LOG(ERR, "(%s) Failed to set backend owner", dev->path);
- return -1;
+ goto destroy;
}
if (dev->ops->get_backend_features(&backend_features) < 0) {
PMD_INIT_LOG(ERR, "(%s) Failed to get backend features", dev->path);
- return -1;
+ goto destroy;
}
dev->unsupported_features = ~(VIRTIO_USER_SUPPORTED_FEATURES | backend_features);
if (dev->ops->get_features(dev, &dev->device_features) < 0) {
PMD_INIT_LOG(ERR, "(%s) Failed to get device features", dev->path);
- return -1;
+ goto destroy;
}
virtio_user_dev_init_mac(dev, mac);
+ if (virtio_user_dev_init_max_queue_pairs(dev, queues))
+ dev->unsupported_features |= (1ull << VIRTIO_NET_F_MQ);
+
+ if (dev->max_queue_pairs > 1)
+ cq = 1;
+
+ if (virtio_user_dev_init_notify(dev) < 0) {
+ PMD_INIT_LOG(ERR, "(%s) Failed to init notifiers", dev->path);
+ goto destroy;
+ }
+
+ if (virtio_user_fill_intr_handle(dev) < 0) {
+ PMD_INIT_LOG(ERR, "(%s) Failed to init interrupt handler", dev->path);
+ goto notify_uninit;
+ }
+
if (!mrg_rxbuf)
dev->unsupported_features |= (1ull << VIRTIO_NET_F_MRG_RXBUF);
@@ -651,11 +693,18 @@ virtio_user_dev_init(struct virtio_user_dev *dev, char *path, uint16_t queues,
if (rte_errno != ENOTSUP) {
PMD_INIT_LOG(ERR, "(%s) Failed to register mem event callback",
dev->path);
- return -1;
+ goto notify_uninit;
}
}
return 0;
+
+notify_uninit:
+ virtio_user_dev_uninit_notify(dev);
+destroy:
+ dev->ops->destroy(dev);
+
+ return -1;
}
void
diff --git a/drivers/net/virtio/virtio_user_ethdev.c b/drivers/net/virtio/virtio_user_ethdev.c
index 6ad5896378..6c3e875793 100644
--- a/drivers/net/virtio/virtio_user_ethdev.c
+++ b/drivers/net/virtio/virtio_user_ethdev.c
@@ -595,8 +595,6 @@ virtio_user_pmd_probe(struct rte_vdev_device *vdev)
VIRTIO_USER_ARG_CQ_NUM);
goto end;
}
- } else if (queues > 1) {
- cq = 1;
}
if (rte_kvargs_count(kvlist, VIRTIO_USER_ARG_PACKED_VQ) == 1) {
@@ -617,11 +615,6 @@ virtio_user_pmd_probe(struct rte_vdev_device *vdev)
}
}
- if (queues > 1 && cq == 0) {
- PMD_INIT_LOG(ERR, "multi-q requires ctrl-q");
- goto end;
- }
-
if (queues > VIRTIO_MAX_VIRTQUEUE_PAIRS) {
PMD_INIT_LOG(ERR, "arg %s %" PRIu64 " exceeds the limit %u",
VIRTIO_USER_ARG_QUEUES_NUM, queues,
--
2.38.1
^ permalink raw reply [flat|nested] 48+ messages in thread
* [PATCH v1 16/21] net/virtio-user: allocate shadow control queue
2022-11-30 15:56 [PATCH v1 00/21] Add control queue & MQ support to Virtio-user vDPA Maxime Coquelin
` (14 preceding siblings ...)
2022-11-30 15:56 ` [PATCH v1 15/21] net/virtio-user: get max number of queue pairs from device Maxime Coquelin
@ 2022-11-30 15:56 ` Maxime Coquelin
2023-01-31 5:21 ` Xia, Chenbo
2022-11-30 15:56 ` [PATCH v1 17/21] net/virtio-user: send shadow virtqueue info to the backend Maxime Coquelin
` (5 subsequent siblings)
21 siblings, 1 reply; 48+ messages in thread
From: Maxime Coquelin @ 2022-11-30 15:56 UTC (permalink / raw)
To: dev, chenbo.xia, david.marchand, eperezma; +Cc: Maxime Coquelin
If the backend supports the control virtqueue, allocate a
shadow control virtqueue, and implement the notify callback
that writes into the kickfd.
Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
---
.../net/virtio/virtio_user/virtio_user_dev.c | 47 ++++++++++++++++++-
.../net/virtio/virtio_user/virtio_user_dev.h | 5 ++
drivers/net/virtio/virtio_user_ethdev.c | 6 +++
3 files changed, 56 insertions(+), 2 deletions(-)
diff --git a/drivers/net/virtio/virtio_user/virtio_user_dev.c b/drivers/net/virtio/virtio_user/virtio_user_dev.c
index a3584e7735..16a0e07413 100644
--- a/drivers/net/virtio/virtio_user/virtio_user_dev.c
+++ b/drivers/net/virtio/virtio_user/virtio_user_dev.c
@@ -146,8 +146,9 @@ virtio_user_dev_set_features(struct virtio_user_dev *dev)
/* Strip VIRTIO_NET_F_MAC, as MAC address is handled in vdev init */
features &= ~(1ull << VIRTIO_NET_F_MAC);
- /* Strip VIRTIO_NET_F_CTRL_VQ, as devices do not really need to know */
- features &= ~(1ull << VIRTIO_NET_F_CTRL_VQ);
+ /* Strip VIRTIO_NET_F_CTRL_VQ if the device does not really support control VQ */
+ if (!dev->hw_cvq)
+ features &= ~(1ull << VIRTIO_NET_F_CTRL_VQ);
features &= ~(1ull << VIRTIO_NET_F_STATUS);
ret = dev->ops->set_features(dev, features);
if (ret < 0)
@@ -911,6 +912,48 @@ virtio_user_handle_cq(struct virtio_user_dev *dev, uint16_t queue_idx)
}
}
+static void
+virtio_user_control_queue_notify(struct virtqueue *vq, void *cookie)
+{
+ struct virtio_user_dev *dev = cookie;
+ uint64_t buf = 1;
+
+ if (write(dev->kickfds[vq->vq_queue_index], &buf, sizeof(buf)) < 0)
+ PMD_DRV_LOG(ERR, "failed to kick backend: %s",
+ strerror(errno));
+}
+
+int
+virtio_user_dev_create_shadow_cvq(struct virtio_user_dev *dev, struct virtqueue *vq)
+{
+ char name[VIRTQUEUE_MAX_NAME_SZ];
+ struct virtqueue *scvq;
+
+ snprintf(name, sizeof(name), "port%d_shadow_cvq", vq->hw->port_id);
+ scvq = virtqueue_alloc(&dev->hw, vq->vq_queue_index, vq->vq_nentries,
+ VTNET_CQ, SOCKET_ID_ANY, name);
+ if (!scvq) {
+ PMD_INIT_LOG(ERR, "(%s) Failed to alloc shadow control vq\n", dev->path);
+ return -ENOMEM;
+ }
+
+ scvq->cq.notify_queue = &virtio_user_control_queue_notify;
+ scvq->cq.notify_cookie = dev;
+ dev->scvq = scvq;
+
+ return 0;
+}
+
+void
+virtio_user_dev_destroy_shadow_cvq(struct virtio_user_dev *dev)
+{
+ if (!dev->scvq)
+ return;
+
+ virtqueue_free(dev->scvq);
+ dev->scvq = NULL;
+}
+
int
virtio_user_dev_set_status(struct virtio_user_dev *dev, uint8_t status)
{
diff --git a/drivers/net/virtio/virtio_user/virtio_user_dev.h b/drivers/net/virtio/virtio_user/virtio_user_dev.h
index 3c5453eac0..e0db4faf3f 100644
--- a/drivers/net/virtio/virtio_user/virtio_user_dev.h
+++ b/drivers/net/virtio/virtio_user/virtio_user_dev.h
@@ -58,6 +58,9 @@ struct virtio_user_dev {
pthread_mutex_t mutex;
bool started;
+ bool hw_cvq;
+ struct virtqueue *scvq;
+
void *backend_data;
};
@@ -74,6 +77,8 @@ void virtio_user_handle_cq(struct virtio_user_dev *dev, uint16_t queue_idx);
void virtio_user_handle_cq_packed(struct virtio_user_dev *dev,
uint16_t queue_idx);
uint8_t virtio_user_handle_mq(struct virtio_user_dev *dev, uint16_t q_pairs);
+int virtio_user_dev_create_shadow_cvq(struct virtio_user_dev *dev, struct virtqueue *vq);
+void virtio_user_dev_destroy_shadow_cvq(struct virtio_user_dev *dev);
int virtio_user_dev_set_status(struct virtio_user_dev *dev, uint8_t status);
int virtio_user_dev_update_status(struct virtio_user_dev *dev);
int virtio_user_dev_update_link_state(struct virtio_user_dev *dev);
diff --git a/drivers/net/virtio/virtio_user_ethdev.c b/drivers/net/virtio/virtio_user_ethdev.c
index 6c3e875793..626bd95b62 100644
--- a/drivers/net/virtio/virtio_user_ethdev.c
+++ b/drivers/net/virtio/virtio_user_ethdev.c
@@ -232,6 +232,9 @@ virtio_user_setup_queue(struct virtio_hw *hw, struct virtqueue *vq)
else
virtio_user_setup_queue_split(vq, dev);
+ if (dev->hw_cvq && hw->cvq && (virtnet_cq_to_vq(hw->cvq) == vq))
+ return virtio_user_dev_create_shadow_cvq(dev, vq);
+
return 0;
}
@@ -251,6 +254,9 @@ virtio_user_del_queue(struct virtio_hw *hw, struct virtqueue *vq)
close(dev->callfds[vq->vq_queue_index]);
close(dev->kickfds[vq->vq_queue_index]);
+
+ if (hw->cvq && (virtnet_cq_to_vq(hw->cvq) == vq) && dev->scvq)
+ virtio_user_dev_destroy_shadow_cvq(dev);
}
static void
--
2.38.1
^ permalink raw reply [flat|nested] 48+ messages in thread
* [PATCH v1 17/21] net/virtio-user: send shadow virtqueue info to the backend
2022-11-30 15:56 [PATCH v1 00/21] Add control queue & MQ support to Virtio-user vDPA Maxime Coquelin
` (15 preceding siblings ...)
2022-11-30 15:56 ` [PATCH v1 16/21] net/virtio-user: allocate shadow control queue Maxime Coquelin
@ 2022-11-30 15:56 ` Maxime Coquelin
2023-01-31 5:22 ` Xia, Chenbo
2022-11-30 15:56 ` [PATCH v1 18/21] net/virtio-user: add new callback to enable control queue Maxime Coquelin
` (4 subsequent siblings)
21 siblings, 1 reply; 48+ messages in thread
From: Maxime Coquelin @ 2022-11-30 15:56 UTC (permalink / raw)
To: dev, chenbo.xia, david.marchand, eperezma; +Cc: Maxime Coquelin
This patch adds support for sending the shadow control
queue information to the backend.
Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
---
.../net/virtio/virtio_user/virtio_user_dev.c | 28 ++++++++++++++++---
1 file changed, 24 insertions(+), 4 deletions(-)
diff --git a/drivers/net/virtio/virtio_user/virtio_user_dev.c b/drivers/net/virtio/virtio_user/virtio_user_dev.c
index 16a0e07413..1a5386a3f6 100644
--- a/drivers/net/virtio/virtio_user/virtio_user_dev.c
+++ b/drivers/net/virtio/virtio_user/virtio_user_dev.c
@@ -66,6 +66,18 @@ virtio_user_kick_queue(struct virtio_user_dev *dev, uint32_t queue_sel)
.flags = 0, /* disable log */
};
+ if (queue_sel == dev->max_queue_pairs * 2) {
+ if (!dev->scvq) {
+ PMD_INIT_LOG(ERR, "(%s) Shadow control queue expected but missing",
+ dev->path);
+ goto err;
+ }
+
+ /* Use shadow control queue information */
+ vring = &dev->scvq->vq_split.ring;
+ pq_vring = &dev->scvq->vq_packed.ring;
+ }
+
if (dev->features & (1ULL << VIRTIO_F_RING_PACKED)) {
addr.desc_user_addr =
(uint64_t)(uintptr_t)pq_vring->desc;
@@ -118,9 +130,13 @@ static int
virtio_user_queue_setup(struct virtio_user_dev *dev,
int (*fn)(struct virtio_user_dev *, uint32_t))
{
- uint32_t i;
+ uint32_t i, nr_vq;
- for (i = 0; i < dev->max_queue_pairs * 2; ++i) {
+ nr_vq = dev->max_queue_pairs * 2;
+ if (dev->hw_cvq)
+ nr_vq++;
+
+ for (i = 0; i < nr_vq; i++) {
if (fn(dev, i) < 0) {
PMD_DRV_LOG(ERR, "(%s) setup VQ %u failed", dev->path, i);
return -1;
@@ -381,11 +397,15 @@ virtio_user_dev_init_mac(struct virtio_user_dev *dev, const char *mac)
static int
virtio_user_dev_init_notify(struct virtio_user_dev *dev)
{
- uint32_t i, j;
+ uint32_t i, j, nr_vq;
int callfd;
int kickfd;
- for (i = 0; i < dev->max_queue_pairs * 2; i++) {
+ nr_vq = dev->max_queue_pairs * 2;
+ if (dev->hw_cvq)
+ nr_vq++;
+
+ for (i = 0; i < nr_vq; i++) {
/* May use invalid flag, but some backend uses kickfd and
* callfd as criteria to judge if dev is alive. so finally we
* use real event_fd.
--
2.38.1
^ permalink raw reply [flat|nested] 48+ messages in thread
* [PATCH v1 18/21] net/virtio-user: add new callback to enable control queue
2022-11-30 15:56 [PATCH v1 00/21] Add control queue & MQ support to Virtio-user vDPA Maxime Coquelin
` (16 preceding siblings ...)
2022-11-30 15:56 ` [PATCH v1 17/21] net/virtio-user: send shadow virtqueue info to the backend Maxime Coquelin
@ 2022-11-30 15:56 ` Maxime Coquelin
2023-01-31 5:22 ` Xia, Chenbo
2022-11-30 15:56 ` [PATCH v1 19/21] net/virtio-user: forward control messages to shadow queue Maxime Coquelin
` (3 subsequent siblings)
21 siblings, 1 reply; 48+ messages in thread
From: Maxime Coquelin @ 2022-11-30 15:56 UTC (permalink / raw)
To: dev, chenbo.xia, david.marchand, eperezma; +Cc: Maxime Coquelin
This patch introduces a new callback, to be invoked when
the backend supports the control virtqueue.
Implementation for Vhost-vDPA backend is added in this patch.
Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
---
drivers/net/virtio/virtio_user/vhost.h | 1 +
drivers/net/virtio/virtio_user/vhost_vdpa.c | 15 +++++++++++++++
drivers/net/virtio/virtio_user/virtio_user_dev.c | 3 +++
3 files changed, 19 insertions(+)
diff --git a/drivers/net/virtio/virtio_user/vhost.h b/drivers/net/virtio/virtio_user/vhost.h
index dfbf6be033..f817cab77a 100644
--- a/drivers/net/virtio/virtio_user/vhost.h
+++ b/drivers/net/virtio/virtio_user/vhost.h
@@ -82,6 +82,7 @@ struct virtio_user_backend_ops {
int (*get_config)(struct virtio_user_dev *dev, uint8_t *data, uint32_t off, uint32_t len);
int (*set_config)(struct virtio_user_dev *dev, const uint8_t *data, uint32_t off,
uint32_t len);
+ int (*cvq_enable)(struct virtio_user_dev *dev, int enable);
int (*enable_qp)(struct virtio_user_dev *dev, uint16_t pair_idx, int enable);
int (*dma_map)(struct virtio_user_dev *dev, void *addr, uint64_t iova, size_t len);
int (*dma_unmap)(struct virtio_user_dev *dev, void *addr, uint64_t iova, size_t len);
diff --git a/drivers/net/virtio/virtio_user/vhost_vdpa.c b/drivers/net/virtio/virtio_user/vhost_vdpa.c
index a0897f8dd1..3fd13d9fac 100644
--- a/drivers/net/virtio/virtio_user/vhost_vdpa.c
+++ b/drivers/net/virtio/virtio_user/vhost_vdpa.c
@@ -564,6 +564,20 @@ vhost_vdpa_destroy(struct virtio_user_dev *dev)
return 0;
}
+static int
+vhost_vdpa_cvq_enable(struct virtio_user_dev *dev, int enable)
+{
+ struct vhost_vring_state state = {
+ .index = dev->max_queue_pairs * 2,
+ .num = enable,
+ };
+
+ if (vhost_vdpa_set_vring_enable(dev, &state))
+ return -1;
+
+ return 0;
+}
+
static int
vhost_vdpa_enable_queue_pair(struct virtio_user_dev *dev,
uint16_t pair_idx,
@@ -629,6 +643,7 @@ struct virtio_user_backend_ops virtio_ops_vdpa = {
.set_status = vhost_vdpa_set_status,
.get_config = vhost_vdpa_get_config,
.set_config = vhost_vdpa_set_config,
+ .cvq_enable = vhost_vdpa_cvq_enable,
.enable_qp = vhost_vdpa_enable_queue_pair,
.dma_map = vhost_vdpa_dma_map_batch,
.dma_unmap = vhost_vdpa_dma_unmap_batch,
diff --git a/drivers/net/virtio/virtio_user/virtio_user_dev.c b/drivers/net/virtio/virtio_user/virtio_user_dev.c
index 1a5386a3f6..b0d603ee12 100644
--- a/drivers/net/virtio/virtio_user/virtio_user_dev.c
+++ b/drivers/net/virtio/virtio_user/virtio_user_dev.c
@@ -767,6 +767,9 @@ virtio_user_handle_mq(struct virtio_user_dev *dev, uint16_t q_pairs)
for (i = q_pairs; i < dev->max_queue_pairs; ++i)
ret |= dev->ops->enable_qp(dev, i, 0);
+ if (dev->scvq)
+ ret |= dev->ops->cvq_enable(dev, 1);
+
dev->queue_pairs = q_pairs;
return ret;
--
2.38.1
^ permalink raw reply [flat|nested] 48+ messages in thread
* [PATCH v1 19/21] net/virtio-user: forward control messages to shadow queue
2022-11-30 15:56 [PATCH v1 00/21] Add control queue & MQ support to Virtio-user vDPA Maxime Coquelin
` (17 preceding siblings ...)
2022-11-30 15:56 ` [PATCH v1 18/21] net/virtio-user: add new callback to enable control queue Maxime Coquelin
@ 2022-11-30 15:56 ` Maxime Coquelin
2022-11-30 16:54 ` Stephen Hemminger
2022-11-30 15:56 ` [PATCH v1 20/21] net/virtio-user: advertize control VQ support with vDPA Maxime Coquelin
` (2 subsequent siblings)
21 siblings, 1 reply; 48+ messages in thread
From: Maxime Coquelin @ 2022-11-30 15:56 UTC (permalink / raw)
To: dev, chenbo.xia, david.marchand, eperezma; +Cc: Maxime Coquelin
This patch implements forwarding of control messages from
the regular control queue to the shadow control queue.
Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
---
.../net/virtio/virtio_user/virtio_user_dev.c | 37 ++++++++++++++++---
.../net/virtio/virtio_user/virtio_user_dev.h | 3 --
drivers/net/virtio/virtio_user_ethdev.c | 6 +--
3 files changed, 33 insertions(+), 13 deletions(-)
diff --git a/drivers/net/virtio/virtio_user/virtio_user_dev.c b/drivers/net/virtio/virtio_user/virtio_user_dev.c
index b0d603ee12..7c48c9bb29 100644
--- a/drivers/net/virtio/virtio_user/virtio_user_dev.c
+++ b/drivers/net/virtio/virtio_user/virtio_user_dev.c
@@ -750,7 +750,7 @@ virtio_user_dev_uninit(struct virtio_user_dev *dev)
dev->ops->destroy(dev);
}
-uint8_t
+static uint8_t
virtio_user_handle_mq(struct virtio_user_dev *dev, uint16_t q_pairs)
{
uint16_t i;
@@ -775,14 +775,17 @@ virtio_user_handle_mq(struct virtio_user_dev *dev, uint16_t q_pairs)
return ret;
}
+#define CVQ_MAX_DATA_DESCS 32
+
static uint32_t
-virtio_user_handle_ctrl_msg(struct virtio_user_dev *dev, struct vring *vring,
+virtio_user_handle_ctrl_msg_split(struct virtio_user_dev *dev, struct vring *vring,
uint16_t idx_hdr)
{
struct virtio_net_ctrl_hdr *hdr;
virtio_net_ctrl_ack status = ~0;
uint16_t i, idx_data, idx_status;
uint32_t n_descs = 0;
+ int dlen[CVQ_MAX_DATA_DESCS], nb_dlen = 0;
/* locate desc for header, data, and status */
idx_data = vring->desc[idx_hdr].next;
@@ -790,6 +793,7 @@ virtio_user_handle_ctrl_msg(struct virtio_user_dev *dev, struct vring *vring,
i = idx_data;
while (vring->desc[i].flags == VRING_DESC_F_NEXT) {
+ dlen[nb_dlen++] = vring->desc[i].len;
i = vring->desc[i].next;
n_descs++;
}
@@ -811,6 +815,11 @@ virtio_user_handle_ctrl_msg(struct virtio_user_dev *dev, struct vring *vring,
status = 0;
}
+ if (status != 0 || !dev->scvq)
+ goto out;
+
+ status = virtio_send_command(&dev->scvq->cq, (struct virtio_pmd_ctrl *)hdr, dlen, nb_dlen);
+out:
/* Update status */
*(virtio_net_ctrl_ack *)(uintptr_t)vring->desc[idx_status].addr = status;
@@ -836,6 +845,7 @@ virtio_user_handle_ctrl_msg_packed(struct virtio_user_dev *dev,
uint16_t idx_data, idx_status;
/* initialize to one, header is first */
uint32_t n_descs = 1;
+ int dlen[CVQ_MAX_DATA_DESCS], nb_dlen = 0;
/* locate desc for header, data, and status */
idx_data = idx_hdr + 1;
@@ -846,6 +856,7 @@ virtio_user_handle_ctrl_msg_packed(struct virtio_user_dev *dev,
idx_status = idx_data;
while (vring->desc[idx_status].flags & VRING_DESC_F_NEXT) {
+ dlen[nb_dlen++] = vring->desc[idx_status].len;
idx_status++;
if (idx_status >= dev->queue_size)
idx_status -= dev->queue_size;
@@ -866,6 +877,11 @@ virtio_user_handle_ctrl_msg_packed(struct virtio_user_dev *dev,
status = 0;
}
+ if (status != 0 || !dev->scvq)
+ goto out;
+
+ status = virtio_send_command(&dev->scvq->cq, (struct virtio_pmd_ctrl *)hdr, dlen, nb_dlen);
+out:
/* Update status */
*(virtio_net_ctrl_ack *)(uintptr_t)
vring->desc[idx_status].addr = status;
@@ -877,7 +893,7 @@ virtio_user_handle_ctrl_msg_packed(struct virtio_user_dev *dev,
return n_descs;
}
-void
+static void
virtio_user_handle_cq_packed(struct virtio_user_dev *dev, uint16_t queue_idx)
{
struct virtio_user_queue *vq = &dev->packed_queues[queue_idx];
@@ -909,8 +925,8 @@ virtio_user_handle_cq_packed(struct virtio_user_dev *dev, uint16_t queue_idx)
}
}
-void
-virtio_user_handle_cq(struct virtio_user_dev *dev, uint16_t queue_idx)
+static void
+virtio_user_handle_cq_split(struct virtio_user_dev *dev, uint16_t queue_idx)
{
uint16_t avail_idx, desc_idx;
struct vring_used_elem *uep;
@@ -924,7 +940,7 @@ virtio_user_handle_cq(struct virtio_user_dev *dev, uint16_t queue_idx)
& (vring->num - 1);
desc_idx = vring->avail->ring[avail_idx];
- n_descs = virtio_user_handle_ctrl_msg(dev, vring, desc_idx);
+ n_descs = virtio_user_handle_ctrl_msg_split(dev, vring, desc_idx);
/* Update used ring */
uep = &vring->used->ring[avail_idx];
@@ -935,6 +951,15 @@ virtio_user_handle_cq(struct virtio_user_dev *dev, uint16_t queue_idx)
}
}
+void
+virtio_user_handle_cq(struct virtio_user_dev *dev, uint16_t queue_idx)
+{
+ if (virtio_with_packed_queue(&dev->hw))
+ virtio_user_handle_cq_packed(dev, queue_idx);
+ else
+ virtio_user_handle_cq_split(dev, queue_idx);
+}
+
static void
virtio_user_control_queue_notify(struct virtqueue *vq, void *cookie)
{
diff --git a/drivers/net/virtio/virtio_user/virtio_user_dev.h b/drivers/net/virtio/virtio_user/virtio_user_dev.h
index e0db4faf3f..e8753f6019 100644
--- a/drivers/net/virtio/virtio_user/virtio_user_dev.h
+++ b/drivers/net/virtio/virtio_user/virtio_user_dev.h
@@ -74,9 +74,6 @@ int virtio_user_dev_init(struct virtio_user_dev *dev, char *path, uint16_t queue
enum virtio_user_backend_type backend_type);
void virtio_user_dev_uninit(struct virtio_user_dev *dev);
void virtio_user_handle_cq(struct virtio_user_dev *dev, uint16_t queue_idx);
-void virtio_user_handle_cq_packed(struct virtio_user_dev *dev,
- uint16_t queue_idx);
-uint8_t virtio_user_handle_mq(struct virtio_user_dev *dev, uint16_t q_pairs);
int virtio_user_dev_create_shadow_cvq(struct virtio_user_dev *dev, struct virtqueue *vq);
void virtio_user_dev_destroy_shadow_cvq(struct virtio_user_dev *dev);
int virtio_user_dev_set_status(struct virtio_user_dev *dev, uint8_t status);
diff --git a/drivers/net/virtio/virtio_user_ethdev.c b/drivers/net/virtio/virtio_user_ethdev.c
index 626bd95b62..d23959e836 100644
--- a/drivers/net/virtio/virtio_user_ethdev.c
+++ b/drivers/net/virtio/virtio_user_ethdev.c
@@ -266,10 +266,8 @@ virtio_user_notify_queue(struct virtio_hw *hw, struct virtqueue *vq)
struct virtio_user_dev *dev = virtio_user_get_dev(hw);
if (hw->cvq && (virtnet_cq_to_vq(hw->cvq) == vq)) {
- if (virtio_with_packed_queue(vq->hw))
- virtio_user_handle_cq_packed(dev, vq->vq_queue_index);
- else
- virtio_user_handle_cq(dev, vq->vq_queue_index);
+ virtio_user_handle_cq(dev, vq->vq_queue_index);
+
return;
}
--
2.38.1
^ permalink raw reply [flat|nested] 48+ messages in thread
* [PATCH v1 20/21] net/virtio-user: advertize control VQ support with vDPA
2022-11-30 15:56 [PATCH v1 00/21] Add control queue & MQ support to Virtio-user vDPA Maxime Coquelin
` (18 preceding siblings ...)
2022-11-30 15:56 ` [PATCH v1 19/21] net/virtio-user: forward control messages to shadow queue Maxime Coquelin
@ 2022-11-30 15:56 ` Maxime Coquelin
2023-01-31 5:24 ` Xia, Chenbo
2022-11-30 15:56 ` [PATCH v1 21/21] net/virtio-user: remove max queues limitation Maxime Coquelin
2023-01-30 5:57 ` [PATCH v1 00/21] Add control queue & MQ support to Virtio-user vDPA Xia, Chenbo
21 siblings, 1 reply; 48+ messages in thread
From: Maxime Coquelin @ 2022-11-30 15:56 UTC (permalink / raw)
To: dev, chenbo.xia, david.marchand, eperezma; +Cc: Maxime Coquelin
This patch advertises control virtqueue support by the vDPA
backend if it supports VIRTIO_NET_F_CTRL_VQ.
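The feature test in the hunk below relies on C operator precedence (`<<` binds tighter than `&`, so the unparenthesized form is correct). A minimal, explicitly parenthesized sketch of the same check; the bit position matches the virtio spec, the helper name is illustrative:

```c
#include <assert.h>
#include <stdint.h>

#define VIRTIO_NET_F_CTRL_VQ 17  /* feature bit position, per the virtio spec */

/* Returns nonzero when the backend advertises a control virtqueue.
 * Equivalent to the patch's `*features & 1ULL << VIRTIO_NET_F_CTRL_VQ`,
 * since `<<` binds tighter than `&`. */
static int has_ctrl_vq(uint64_t features)
{
	return (features & (1ULL << VIRTIO_NET_F_CTRL_VQ)) != 0;
}
```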
Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
---
drivers/net/virtio/virtio_user/vhost_vdpa.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/drivers/net/virtio/virtio_user/vhost_vdpa.c b/drivers/net/virtio/virtio_user/vhost_vdpa.c
index 3fd13d9fac..7bb4995893 100644
--- a/drivers/net/virtio/virtio_user/vhost_vdpa.c
+++ b/drivers/net/virtio/virtio_user/vhost_vdpa.c
@@ -135,8 +135,8 @@ vhost_vdpa_get_features(struct virtio_user_dev *dev, uint64_t *features)
return -1;
}
- /* Multiqueue not supported for now */
- *features &= ~(1ULL << VIRTIO_NET_F_MQ);
+ if (*features & 1ULL << VIRTIO_NET_F_CTRL_VQ)
+ dev->hw_cvq = true;
/* Negotiated vDPA backend features */
ret = vhost_vdpa_get_protocol_features(dev, &data->protocol_features);
--
2.38.1
* [PATCH v1 21/21] net/virtio-user: remove max queues limitation
2022-11-30 15:56 [PATCH v1 00/21] Add control queue & MQ support to Virtio-user vDPA Maxime Coquelin
` (19 preceding siblings ...)
2022-11-30 15:56 ` [PATCH v1 20/21] net/virtio-user: advertize control VQ support with vDPA Maxime Coquelin
@ 2022-11-30 15:56 ` Maxime Coquelin
2023-01-31 5:19 ` Xia, Chenbo
2023-01-30 5:57 ` [PATCH v1 00/21] Add control queue & MQ support to Virtio-user vDPA Xia, Chenbo
21 siblings, 1 reply; 48+ messages in thread
From: Maxime Coquelin @ 2022-11-30 15:56 UTC (permalink / raw)
To: dev, chenbo.xia, david.marchand, eperezma; +Cc: Maxime Coquelin
This patch removes the limitation of 8 queue pairs by
dynamically allocating vring metadata once the maximum
number of queue pairs supported by the backend is known.
This is especially useful for Vhost-vDPA with physical
devices, where the number of supported queue pairs may be
much higher than 8.
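The dynamic sizing idea can be sketched as below. This is a simplified standalone illustration, not the driver code: `calloc()` stands in for `rte_zmalloc()`, and `struct vring_meta` is a placeholder for the per-vring metadata (callfds, kickfds, vring descriptors) that the patch allocates.

```c
#include <assert.h>
#include <stdlib.h>

/* Placeholder per-vring metadata, for illustration only. */
struct vring_meta { int callfd; int kickfd; };

/* Size the metadata from the backend-reported maximum number of queue
 * pairs (two vrings per pair, plus one for the control queue) instead
 * of a compile-time VIRTIO_MAX_VIRTQUEUES cap. */
static struct vring_meta *alloc_vring_meta(unsigned int max_qpairs, int has_cvq)
{
	unsigned int i, nr_vrings = max_qpairs * 2 + (has_cvq ? 1 : 0);
	struct vring_meta *m = calloc(nr_vrings, sizeof(*m));

	if (m == NULL)
		return NULL;

	for (i = 0; i < nr_vrings; i++) {
		m[i].callfd = -1;  /* mark eventfds as unset */
		m[i].kickfd = -1;
	}
	return m;
}
```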
Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
---
drivers/net/virtio/virtio.h | 6 -
.../net/virtio/virtio_user/virtio_user_dev.c | 118 ++++++++++++++----
.../net/virtio/virtio_user/virtio_user_dev.h | 16 +--
drivers/net/virtio/virtio_user_ethdev.c | 17 +--
4 files changed, 109 insertions(+), 48 deletions(-)
diff --git a/drivers/net/virtio/virtio.h b/drivers/net/virtio/virtio.h
index 5c8f71a44d..04a897bf51 100644
--- a/drivers/net/virtio/virtio.h
+++ b/drivers/net/virtio/virtio.h
@@ -124,12 +124,6 @@
VIRTIO_NET_HASH_TYPE_UDP_EX)
-/*
- * Maximum number of virtqueues per device.
- */
-#define VIRTIO_MAX_VIRTQUEUE_PAIRS 8
-#define VIRTIO_MAX_VIRTQUEUES (VIRTIO_MAX_VIRTQUEUE_PAIRS * 2 + 1)
-
/* VirtIO device IDs. */
#define VIRTIO_ID_NETWORK 0x01
#define VIRTIO_ID_BLOCK 0x02
diff --git a/drivers/net/virtio/virtio_user/virtio_user_dev.c b/drivers/net/virtio/virtio_user/virtio_user_dev.c
index 7c48c9bb29..aa24fdea70 100644
--- a/drivers/net/virtio/virtio_user/virtio_user_dev.c
+++ b/drivers/net/virtio/virtio_user/virtio_user_dev.c
@@ -17,6 +17,7 @@
#include <rte_alarm.h>
#include <rte_string_fns.h>
#include <rte_eal_memconfig.h>
+#include <rte_malloc.h>
#include "vhost.h"
#include "virtio_user_dev.h"
@@ -58,8 +59,8 @@ virtio_user_kick_queue(struct virtio_user_dev *dev, uint32_t queue_sel)
int ret;
struct vhost_vring_file file;
struct vhost_vring_state state;
- struct vring *vring = &dev->vrings[queue_sel];
- struct vring_packed *pq_vring = &dev->packed_vrings[queue_sel];
+ struct vring *vring = &dev->vrings.split[queue_sel];
+ struct vring_packed *pq_vring = &dev->vrings.packed[queue_sel];
struct vhost_vring_addr addr = {
.index = queue_sel,
.log_guest_addr = 0,
@@ -299,18 +300,6 @@ virtio_user_dev_init_max_queue_pairs(struct virtio_user_dev *dev, uint32_t user_
return ret;
}
- if (dev->max_queue_pairs > VIRTIO_MAX_VIRTQUEUE_PAIRS) {
- /*
- * If the device supports control queue, the control queue
- * index is max_virtqueue_pairs * 2. Disable MQ if it happens.
- */
- PMD_DRV_LOG(ERR, "(%s) Device advertises too many queues (%u, max supported %u)",
- dev->path, dev->max_queue_pairs, VIRTIO_MAX_VIRTQUEUE_PAIRS);
- dev->max_queue_pairs = 1;
-
- return -1;
- }
-
return 0;
}
@@ -579,6 +568,86 @@ virtio_user_dev_setup(struct virtio_user_dev *dev)
return 0;
}
+static int
+virtio_user_alloc_vrings(struct virtio_user_dev *dev)
+{
+ int i, size, nr_vrings;
+
+ nr_vrings = dev->max_queue_pairs * 2;
+ if (dev->hw_cvq)
+ nr_vrings++;
+
+ dev->callfds = rte_zmalloc("virtio_user_dev", nr_vrings * sizeof(*dev->callfds), 0);
+ if (!dev->callfds) {
+ PMD_INIT_LOG(ERR, "(%s) Failed to alloc callfds", dev->path);
+ return -1;
+ }
+
+ dev->kickfds = rte_zmalloc("virtio_user_dev", nr_vrings * sizeof(*dev->kickfds), 0);
+ if (!dev->kickfds) {
+ PMD_INIT_LOG(ERR, "(%s) Failed to alloc kickfds", dev->path);
+ goto free_callfds;
+ }
+
+ for (i = 0; i < nr_vrings; i++) {
+ dev->callfds[i] = -1;
+ dev->kickfds[i] = -1;
+ }
+
+ size = RTE_MAX(sizeof(*dev->vrings.split), sizeof(*dev->vrings.packed));
+ dev->vrings.ptr = rte_zmalloc("virtio_user_dev", nr_vrings * size, 0);
+ if (!dev->vrings.ptr) {
+ PMD_INIT_LOG(ERR, "(%s) Failed to alloc vrings metadata", dev->path);
+ goto free_kickfds;
+ }
+
+ dev->packed_queues = rte_zmalloc("virtio_user_dev",
+ nr_vrings * sizeof(*dev->packed_queues), 0);
+ if (!dev->packed_queues) {
+ PMD_INIT_LOG(ERR, "(%s) Failed to alloc packed queues metadata", dev->path);
+ goto free_vrings;
+ }
+
+ dev->qp_enabled = rte_zmalloc("virtio_user_dev",
+ dev->max_queue_pairs * sizeof(*dev->qp_enabled), 0);
+ if (!dev->qp_enabled) {
+ PMD_INIT_LOG(ERR, "(%s) Failed to alloc QP enable states", dev->path);
+ goto free_packed_queues;
+ }
+
+ return 0;
+
+free_packed_queues:
+ rte_free(dev->packed_queues);
+ dev->packed_queues = NULL;
+free_vrings:
+ rte_free(dev->vrings.ptr);
+ dev->vrings.ptr = NULL;
+free_kickfds:
+ rte_free(dev->kickfds);
+ dev->kickfds = NULL;
+free_callfds:
+ rte_free(dev->callfds);
+ dev->callfds = NULL;
+
+ return -1;
+}
+
+static void
+virtio_user_free_vrings(struct virtio_user_dev *dev)
+{
+ rte_free(dev->qp_enabled);
+ dev->qp_enabled = NULL;
+ rte_free(dev->packed_queues);
+ dev->packed_queues = NULL;
+ rte_free(dev->vrings.ptr);
+ dev->vrings.ptr = NULL;
+ rte_free(dev->kickfds);
+ dev->kickfds = NULL;
+ rte_free(dev->callfds);
+ dev->callfds = NULL;
+}
+
/* Use below macro to filter features from vhost backend */
#define VIRTIO_USER_SUPPORTED_FEATURES \
(1ULL << VIRTIO_NET_F_MAC | \
@@ -607,16 +676,10 @@ virtio_user_dev_init(struct virtio_user_dev *dev, char *path, uint16_t queues,
enum virtio_user_backend_type backend_type)
{
uint64_t backend_features;
- int i;
pthread_mutex_init(&dev->mutex, NULL);
strlcpy(dev->path, path, PATH_MAX);
- for (i = 0; i < VIRTIO_MAX_VIRTQUEUES; i++) {
- dev->kickfds[i] = -1;
- dev->callfds[i] = -1;
- }
-
dev->started = 0;
dev->queue_pairs = 1; /* mq disabled by default */
dev->queue_size = queue_size;
@@ -661,9 +724,14 @@ virtio_user_dev_init(struct virtio_user_dev *dev, char *path, uint16_t queues,
if (dev->max_queue_pairs > 1)
cq = 1;
+ if (virtio_user_alloc_vrings(dev) < 0) {
+ PMD_INIT_LOG(ERR, "(%s) Failed to allocate vring metadata", dev->path);
+ goto destroy;
+ }
+
if (virtio_user_dev_init_notify(dev) < 0) {
PMD_INIT_LOG(ERR, "(%s) Failed to init notifiers", dev->path);
- goto destroy;
+ goto free_vrings;
}
if (virtio_user_fill_intr_handle(dev) < 0) {
@@ -722,6 +790,8 @@ virtio_user_dev_init(struct virtio_user_dev *dev, char *path, uint16_t queues,
notify_uninit:
virtio_user_dev_uninit_notify(dev);
+free_vrings:
+ virtio_user_free_vrings(dev);
destroy:
dev->ops->destroy(dev);
@@ -742,6 +812,8 @@ virtio_user_dev_uninit(struct virtio_user_dev *dev)
virtio_user_dev_uninit_notify(dev);
+ virtio_user_free_vrings(dev);
+
free(dev->ifname);
if (dev->is_server)
@@ -897,7 +969,7 @@ static void
virtio_user_handle_cq_packed(struct virtio_user_dev *dev, uint16_t queue_idx)
{
struct virtio_user_queue *vq = &dev->packed_queues[queue_idx];
- struct vring_packed *vring = &dev->packed_vrings[queue_idx];
+ struct vring_packed *vring = &dev->vrings.packed[queue_idx];
uint16_t n_descs, flags;
/* Perform a load-acquire barrier in desc_is_avail to
@@ -931,7 +1003,7 @@ virtio_user_handle_cq_split(struct virtio_user_dev *dev, uint16_t queue_idx)
uint16_t avail_idx, desc_idx;
struct vring_used_elem *uep;
uint32_t n_descs;
- struct vring *vring = &dev->vrings[queue_idx];
+ struct vring *vring = &dev->vrings.split[queue_idx];
/* Consume avail ring, using used ring idx as first one */
while (__atomic_load_n(&vring->used->idx, __ATOMIC_RELAXED)
diff --git a/drivers/net/virtio/virtio_user/virtio_user_dev.h b/drivers/net/virtio/virtio_user/virtio_user_dev.h
index e8753f6019..7323d88302 100644
--- a/drivers/net/virtio/virtio_user/virtio_user_dev.h
+++ b/drivers/net/virtio/virtio_user/virtio_user_dev.h
@@ -29,8 +29,8 @@ struct virtio_user_dev {
enum virtio_user_backend_type backend_type;
bool is_server; /* server or client mode */
- int callfds[VIRTIO_MAX_VIRTQUEUES];
- int kickfds[VIRTIO_MAX_VIRTQUEUES];
+ int *callfds;
+ int *kickfds;
int mac_specified;
uint16_t max_queue_pairs;
uint16_t queue_pairs;
@@ -48,11 +48,13 @@ struct virtio_user_dev {
char *ifname;
union {
- struct vring vrings[VIRTIO_MAX_VIRTQUEUES];
- struct vring_packed packed_vrings[VIRTIO_MAX_VIRTQUEUES];
- };
- struct virtio_user_queue packed_queues[VIRTIO_MAX_VIRTQUEUES];
- bool qp_enabled[VIRTIO_MAX_VIRTQUEUE_PAIRS];
+ void *ptr;
+ struct vring *split;
+ struct vring_packed *packed;
+ } vrings;
+
+ struct virtio_user_queue *packed_queues;
+ bool *qp_enabled;
struct virtio_user_backend_ops *ops;
pthread_mutex_t mutex;
diff --git a/drivers/net/virtio/virtio_user_ethdev.c b/drivers/net/virtio/virtio_user_ethdev.c
index d23959e836..b1fc4d5d30 100644
--- a/drivers/net/virtio/virtio_user_ethdev.c
+++ b/drivers/net/virtio/virtio_user_ethdev.c
@@ -186,7 +186,7 @@ virtio_user_setup_queue_packed(struct virtqueue *vq,
uint64_t used_addr;
uint16_t i;
- vring = &dev->packed_vrings[queue_idx];
+ vring = &dev->vrings.packed[queue_idx];
desc_addr = (uintptr_t)vq->vq_ring_virt_mem;
avail_addr = desc_addr + vq->vq_nentries *
sizeof(struct vring_packed_desc);
@@ -216,10 +216,10 @@ virtio_user_setup_queue_split(struct virtqueue *vq, struct virtio_user_dev *dev)
ring[vq->vq_nentries]),
VIRTIO_VRING_ALIGN);
- dev->vrings[queue_idx].num = vq->vq_nentries;
- dev->vrings[queue_idx].desc = (void *)(uintptr_t)desc_addr;
- dev->vrings[queue_idx].avail = (void *)(uintptr_t)avail_addr;
- dev->vrings[queue_idx].used = (void *)(uintptr_t)used_addr;
+ dev->vrings.split[queue_idx].num = vq->vq_nentries;
+ dev->vrings.split[queue_idx].desc = (void *)(uintptr_t)desc_addr;
+ dev->vrings.split[queue_idx].avail = (void *)(uintptr_t)avail_addr;
+ dev->vrings.split[queue_idx].used = (void *)(uintptr_t)used_addr;
}
static int
@@ -619,13 +619,6 @@ virtio_user_pmd_probe(struct rte_vdev_device *vdev)
}
}
- if (queues > VIRTIO_MAX_VIRTQUEUE_PAIRS) {
- PMD_INIT_LOG(ERR, "arg %s %" PRIu64 " exceeds the limit %u",
- VIRTIO_USER_ARG_QUEUES_NUM, queues,
- VIRTIO_MAX_VIRTQUEUE_PAIRS);
- goto end;
- }
-
if (rte_kvargs_count(kvlist, VIRTIO_USER_ARG_MRG_RXBUF) == 1) {
if (rte_kvargs_process(kvlist, VIRTIO_USER_ARG_MRG_RXBUF,
&get_integer_arg, &mrg_rxbuf) < 0) {
--
2.38.1
* Re: [PATCH v1 19/21] net/virtio-user: forward control messages to shadow queue
2022-11-30 15:56 ` [PATCH v1 19/21] net/virtio-user: forward control messages to shadow queue Maxime Coquelin
@ 2022-11-30 16:54 ` Stephen Hemminger
2022-12-06 12:58 ` Maxime Coquelin
0 siblings, 1 reply; 48+ messages in thread
From: Stephen Hemminger @ 2022-11-30 16:54 UTC (permalink / raw)
To: Maxime Coquelin; +Cc: dev, chenbo.xia, david.marchand, eperezma
On Wed, 30 Nov 2022 16:56:37 +0100
Maxime Coquelin <maxime.coquelin@redhat.com> wrote:
> + if (status != 0 || !dev->scvq)
> + goto out;
> +
> + status = virtio_send_command(&dev->scvq->cq, (struct virtio_pmd_ctrl *)hdr, dlen, nb_dlen);
> +out:
Maybe I am only looking at the diff and not seeing something, but
this looks like just an if statement. Why the goto here?
* Re: [PATCH v1 19/21] net/virtio-user: forward control messages to shadow queue
2022-11-30 16:54 ` Stephen Hemminger
@ 2022-12-06 12:58 ` Maxime Coquelin
0 siblings, 0 replies; 48+ messages in thread
From: Maxime Coquelin @ 2022-12-06 12:58 UTC (permalink / raw)
To: Stephen Hemminger; +Cc: dev, chenbo.xia, david.marchand, eperezma
On 11/30/22 17:54, Stephen Hemminger wrote:
> On Wed, 30 Nov 2022 16:56:37 +0100
> Maxime Coquelin <maxime.coquelin@redhat.com> wrote:
>
>> + if (status != 0 || !dev->scvq)
>> + goto out;
>> +
>> + status = virtio_send_command(&dev->scvq->cq, (struct virtio_pmd_ctrl *)hdr, dlen, nb_dlen);
>> +out:
>
> Maybe I am only looking at the diff and not seeing something, but
> this looks like just an if statement. Why the goto here?
>
The code was a bit more complex initially, but now that it has been
simplified, I agree the goto no longer makes sense.
I will rework it in v2.
Thanks,
Maxime
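For reference, the simplification agreed above collapses the goto/label pair into a single if. In this minimal sketch, `fake_send()` and `finalize_status()` are illustrative stand-ins for `virtio_send_command()` and the surrounding handler, under the assumption that forwarding only happens when local handling succeeded and a shadow control queue exists:

```c
#include <assert.h>
#include <stdbool.h>

/* Stand-in for virtio_send_command(): returns 0 on success. */
static int fake_send(void)
{
	return 0;
}

/* Forward to the shadow CVQ only if local handling succeeded and a
 * shadow control queue is present; otherwise keep the current status. */
static int finalize_status(int status, bool has_scvq)
{
	if (status == 0 && has_scvq)
		status = fake_send();
	return status;
}
```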
* RE: [PATCH v1 00/21] Add control queue & MQ support to Virtio-user vDPA
2022-11-30 15:56 [PATCH v1 00/21] Add control queue & MQ support to Virtio-user vDPA Maxime Coquelin
` (20 preceding siblings ...)
2022-11-30 15:56 ` [PATCH v1 21/21] net/virtio-user: remove max queues limitation Maxime Coquelin
@ 2023-01-30 5:57 ` Xia, Chenbo
2023-02-07 10:08 ` Maxime Coquelin
21 siblings, 1 reply; 48+ messages in thread
From: Xia, Chenbo @ 2023-01-30 5:57 UTC (permalink / raw)
To: Coquelin, Maxime; +Cc: dev, david.marchand, eperezma
Hi Maxime,
> -----Original Message-----
> From: Maxime Coquelin <maxime.coquelin@redhat.com>
> Sent: Wednesday, November 30, 2022 11:56 PM
> To: dev@dpdk.org; Xia, Chenbo <chenbo.xia@intel.com>;
> david.marchand@redhat.com; eperezma@redhat.com
> Cc: Maxime Coquelin <maxime.coquelin@redhat.com>
> Subject: [PATCH v1 00/21] Add control queue & MQ support to Virtio-user
> vDPA
>
> --
> 2.38.1
I see one virtio test failed on patchwork; could you check whether
it's related?
Thanks,
Chenbo
* RE: [PATCH v1 10/21] net/virtio: alloc Rx SW ring only if vectorized path
2022-11-30 15:56 ` [PATCH v1 10/21] net/virtio: alloc Rx SW ring only if vectorized path Maxime Coquelin
@ 2023-01-30 7:49 ` Xia, Chenbo
2023-02-07 10:12 ` Maxime Coquelin
0 siblings, 1 reply; 48+ messages in thread
From: Xia, Chenbo @ 2023-01-30 7:49 UTC (permalink / raw)
To: Coquelin, Maxime, dev, david.marchand, eperezma
Hi Maxime,
> -----Original Message-----
> From: Maxime Coquelin <maxime.coquelin@redhat.com>
> Sent: Wednesday, November 30, 2022 11:56 PM
> To: dev@dpdk.org; Xia, Chenbo <chenbo.xia@intel.com>;
> david.marchand@redhat.com; eperezma@redhat.com
> Cc: Maxime Coquelin <maxime.coquelin@redhat.com>
> Subject: [PATCH v1 10/21] net/virtio: alloc Rx SW ring only if vectorized
> path
>
> This patch only allocates the SW ring when vectorized
> datapath is used. It also moves the SW ring and fake mbuf
> in the virtnet_rx struct since this is Rx-only.
>
> Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
> ---
> drivers/net/virtio/virtio_ethdev.c | 88 ++++++++++++-------
> drivers/net/virtio/virtio_rxtx.c | 8 +-
> drivers/net/virtio/virtio_rxtx.h | 4 +-
> drivers/net/virtio/virtio_rxtx_simple.h | 2 +-
> .../net/virtio/virtio_rxtx_simple_altivec.c | 4 +-
> drivers/net/virtio/virtio_rxtx_simple_neon.c | 4 +-
> drivers/net/virtio/virtio_rxtx_simple_sse.c | 4 +-
> drivers/net/virtio/virtqueue.c | 6 +-
> drivers/net/virtio/virtqueue.h | 1 -
> 9 files changed, 72 insertions(+), 49 deletions(-)
>
> diff --git a/drivers/net/virtio/virtio_ethdev.c
> b/drivers/net/virtio/virtio_ethdev.c
> index 8b17b450ec..46dd5606f6 100644
> --- a/drivers/net/virtio/virtio_ethdev.c
> +++ b/drivers/net/virtio/virtio_ethdev.c
> @@ -339,6 +339,47 @@ virtio_free_queue_headers(struct virtqueue *vq)
> *hdr_mem = 0;
> }
>
> +static int
> +virtio_rxq_sw_ring_alloc(struct virtqueue *vq, int numa_node)
> +{
> + void *sw_ring;
> + struct rte_mbuf *mbuf;
> + size_t size;
> +
> + /* SW ring is only used with vectorized datapath */
> + if (!vq->hw->use_vec_rx)
> + return 0;
> +
> + size = (RTE_PMD_VIRTIO_RX_MAX_BURST + vq->vq_nentries) * sizeof(vq-
> >rxq.sw_ring[0]);
> +
> + sw_ring = rte_zmalloc_socket("sw_ring", size, RTE_CACHE_LINE_SIZE,
> numa_node);
> + if (!sw_ring) {
> + PMD_INIT_LOG(ERR, "can not allocate RX soft ring");
> + return -ENOMEM;
> + }
> +
> + mbuf = rte_zmalloc_socket("sw_ring", sizeof(*mbuf),
> RTE_CACHE_LINE_SIZE, numa_node);
> + if (!mbuf) {
> + PMD_INIT_LOG(ERR, "can not allocate fake mbuf");
> + rte_free(sw_ring);
> + return -ENOMEM;
> + }
> +
> + vq->rxq.sw_ring = sw_ring;
> + vq->rxq.fake_mbuf = mbuf;
> +
> + return 0;
> +}
> +
> +static void
> +virtio_rxq_sw_ring_free(struct virtqueue *vq)
> +{
> + rte_free(vq->rxq.fake_mbuf);
> + vq->rxq.fake_mbuf = NULL;
> + rte_free(vq->rxq.sw_ring);
> + vq->rxq.sw_ring = NULL;
> +}
> +
> static int
> virtio_init_queue(struct rte_eth_dev *dev, uint16_t queue_idx)
> {
> @@ -346,14 +387,11 @@ virtio_init_queue(struct rte_eth_dev *dev, uint16_t
> queue_idx)
> const struct rte_memzone *mz = NULL;
> unsigned int vq_size, size;
> struct virtio_hw *hw = dev->data->dev_private;
> - struct virtnet_rx *rxvq = NULL;
> struct virtnet_ctl *cvq = NULL;
> struct virtqueue *vq;
> - void *sw_ring = NULL;
> int queue_type = virtio_get_queue_type(hw, queue_idx);
> int ret;
> int numa_node = dev->device->numa_node;
> - struct rte_mbuf *fake_mbuf = NULL;
>
> PMD_INIT_LOG(INFO, "setting up queue: %u on NUMA node %d",
> queue_idx, numa_node);
> @@ -441,28 +479,9 @@ virtio_init_queue(struct rte_eth_dev *dev, uint16_t
> queue_idx)
> }
>
> if (queue_type == VTNET_RQ) {
> - size_t sz_sw = (RTE_PMD_VIRTIO_RX_MAX_BURST + vq_size) *
> - sizeof(vq->sw_ring[0]);
> -
> - sw_ring = rte_zmalloc_socket("sw_ring", sz_sw,
> - RTE_CACHE_LINE_SIZE, numa_node);
> - if (!sw_ring) {
> - PMD_INIT_LOG(ERR, "can not allocate RX soft ring");
> - ret = -ENOMEM;
> + ret = virtio_rxq_sw_ring_alloc(vq, numa_node);
> + if (ret)
> goto free_hdr_mz;
> - }
> -
> - fake_mbuf = rte_zmalloc_socket("sw_ring", sizeof(*fake_mbuf),
> - RTE_CACHE_LINE_SIZE, numa_node);
> - if (!fake_mbuf) {
> - PMD_INIT_LOG(ERR, "can not allocate fake mbuf");
> - ret = -ENOMEM;
> - goto free_sw_ring;
> - }
> -
> - vq->sw_ring = sw_ring;
> - rxvq = &vq->rxq;
> - rxvq->fake_mbuf = fake_mbuf;
> } else if (queue_type == VTNET_TQ) {
> virtqueue_txq_indirect_headers_init(vq);
> } else if (queue_type == VTNET_CQ) {
> @@ -486,9 +505,8 @@ virtio_init_queue(struct rte_eth_dev *dev, uint16_t
> queue_idx)
>
> clean_vq:
> hw->cvq = NULL;
> - rte_free(fake_mbuf);
> -free_sw_ring:
> - rte_free(sw_ring);
> + if (queue_type == VTNET_RQ)
> + virtio_rxq_sw_ring_free(vq);
> free_hdr_mz:
> virtio_free_queue_headers(vq);
> free_mz:
> @@ -519,7 +537,7 @@ virtio_free_queues(struct virtio_hw *hw)
> queue_type = virtio_get_queue_type(hw, i);
> if (queue_type == VTNET_RQ) {
> rte_free(vq->rxq.fake_mbuf);
> - rte_free(vq->sw_ring);
> + rte_free(vq->rxq.sw_ring);
> }
>
> virtio_free_queue_headers(vq);
> @@ -2195,6 +2213,11 @@ eth_virtio_dev_init(struct rte_eth_dev *eth_dev)
>
> rte_spinlock_init(&hw->state_lock);
>
> + if (vectorized) {
> + hw->use_vec_rx = 1;
> + hw->use_vec_tx = 1;
> + }
> +
> /* reset device and negotiate default features */
> ret = virtio_init_device(eth_dev, VIRTIO_PMD_DEFAULT_GUEST_FEATURES);
> if (ret < 0)
> @@ -2202,12 +2225,11 @@ eth_virtio_dev_init(struct rte_eth_dev *eth_dev)
>
> if (vectorized) {
> if (!virtio_with_packed_queue(hw)) {
> - hw->use_vec_rx = 1;
> + hw->use_vec_tx = 0;
> } else {
> -#if defined(CC_AVX512_SUPPORT) || defined(RTE_ARCH_ARM)
> - hw->use_vec_rx = 1;
> - hw->use_vec_tx = 1;
> -#else
> +#if !defined(CC_AVX512_SUPPORT) && !defined(RTE_ARCH_ARM)
> + hw->use_vec_rx = 0;
> + hw->use_vec_tx = 0;
> PMD_DRV_LOG(INFO,
> "building environment do not support packed ring
> vectorized");
> #endif
> diff --git a/drivers/net/virtio/virtio_rxtx.c
> b/drivers/net/virtio/virtio_rxtx.c
> index 4f69b97f41..2d0afd3302 100644
> --- a/drivers/net/virtio/virtio_rxtx.c
> +++ b/drivers/net/virtio/virtio_rxtx.c
> @@ -737,9 +737,11 @@ virtio_dev_rx_queue_setup_finish(struct rte_eth_dev
> *dev, uint16_t queue_idx)
> virtio_rxq_vec_setup(rxvq);
> }
>
> - memset(rxvq->fake_mbuf, 0, sizeof(*rxvq->fake_mbuf));
> - for (desc_idx = 0; desc_idx < RTE_PMD_VIRTIO_RX_MAX_BURST;
> desc_idx++)
> - vq->sw_ring[vq->vq_nentries + desc_idx] = rxvq->fake_mbuf;
> + if (hw->use_vec_rx) {
> + memset(rxvq->fake_mbuf, 0, sizeof(*rxvq->fake_mbuf));
> + for (desc_idx = 0; desc_idx < RTE_PMD_VIRTIO_RX_MAX_BURST;
> desc_idx++)
> + vq->rxq.sw_ring[vq->vq_nentries + desc_idx] = rxvq-
> >fake_mbuf;
> + }
>
> if (hw->use_vec_rx && !virtio_with_packed_queue(hw)) {
> while (vq->vq_free_cnt >= RTE_VIRTIO_VPMD_RX_REARM_THRESH) {
> diff --git a/drivers/net/virtio/virtio_rxtx.h
> b/drivers/net/virtio/virtio_rxtx.h
> index 57af630110..afc4b74534 100644
> --- a/drivers/net/virtio/virtio_rxtx.h
> +++ b/drivers/net/virtio/virtio_rxtx.h
> @@ -18,8 +18,8 @@ struct virtnet_stats {
> };
>
> struct virtnet_rx {
> - /* dummy mbuf, for wraparound when processing RX ring. */
> - struct rte_mbuf *fake_mbuf;
> + struct rte_mbuf **sw_ring; /**< RX software ring. */
> + struct rte_mbuf *fake_mbuf; /**< dummy mbuf, for wraparound when
> processing RX ring. */
> uint64_t mbuf_initializer; /**< value to init mbufs. */
> struct rte_mempool *mpool; /**< mempool for mbuf allocation */
>
> diff --git a/drivers/net/virtio/virtio_rxtx_simple.h
> b/drivers/net/virtio/virtio_rxtx_simple.h
> index 8e235f4dbc..79196ed86e 100644
> --- a/drivers/net/virtio/virtio_rxtx_simple.h
> +++ b/drivers/net/virtio/virtio_rxtx_simple.h
> @@ -26,7 +26,7 @@ virtio_rxq_rearm_vec(struct virtnet_rx *rxvq)
> struct virtqueue *vq = virtnet_rxq_to_vq(rxvq);
>
> desc_idx = vq->vq_avail_idx & (vq->vq_nentries - 1);
> - sw_ring = &vq->sw_ring[desc_idx];
> + sw_ring = &vq->rxq.sw_ring[desc_idx];
> start_dp = &vq->vq_split.ring.desc[desc_idx];
>
> ret = rte_mempool_get_bulk(rxvq->mpool, (void **)sw_ring,
> diff --git a/drivers/net/virtio/virtio_rxtx_simple_altivec.c
> b/drivers/net/virtio/virtio_rxtx_simple_altivec.c
> index e7f0ed6068..7910efc153 100644
> --- a/drivers/net/virtio/virtio_rxtx_simple_altivec.c
> +++ b/drivers/net/virtio/virtio_rxtx_simple_altivec.c
> @@ -103,8 +103,8 @@ virtio_recv_pkts_vec(void *rx_queue, struct rte_mbuf
> **rx_pkts,
>
> desc_idx = (uint16_t)(vq->vq_used_cons_idx & (vq->vq_nentries - 1));
> rused = &vq->vq_split.ring.used->ring[desc_idx];
> - sw_ring = &vq->sw_ring[desc_idx];
> - sw_ring_end = &vq->sw_ring[vq->vq_nentries];
> + sw_ring = &vq->rxq.sw_ring[desc_idx];
After `sw_ring =`, there are two spaces; there should be only one.
> + sw_ring_end = &vq->rxq.sw_ring[vq->vq_nentries];
>
> rte_prefetch0(rused);
>
> diff --git a/drivers/net/virtio/virtio_rxtx_simple_neon.c
> b/drivers/net/virtio/virtio_rxtx_simple_neon.c
> index 7fd92d1b0c..ffaa139bd6 100644
> --- a/drivers/net/virtio/virtio_rxtx_simple_neon.c
> +++ b/drivers/net/virtio/virtio_rxtx_simple_neon.c
> @@ -101,8 +101,8 @@ virtio_recv_pkts_vec(void *rx_queue,
>
> desc_idx = (uint16_t)(vq->vq_used_cons_idx & (vq->vq_nentries - 1));
> rused = &vq->vq_split.ring.used->ring[desc_idx];
> - sw_ring = &vq->sw_ring[desc_idx];
> - sw_ring_end = &vq->sw_ring[vq->vq_nentries];
> + sw_ring = &vq->rxq.sw_ring[desc_idx];
Ditto
> + sw_ring_end = &vq->rxq.sw_ring[vq->vq_nentries];
>
> rte_prefetch_non_temporal(rused);
>
> diff --git a/drivers/net/virtio/virtio_rxtx_simple_sse.c
> b/drivers/net/virtio/virtio_rxtx_simple_sse.c
> index 7577f5e86d..ed608fbf2e 100644
> --- a/drivers/net/virtio/virtio_rxtx_simple_sse.c
> +++ b/drivers/net/virtio/virtio_rxtx_simple_sse.c
> @@ -101,8 +101,8 @@ virtio_recv_pkts_vec(void *rx_queue, struct rte_mbuf
> **rx_pkts,
>
> desc_idx = (uint16_t)(vq->vq_used_cons_idx & (vq->vq_nentries - 1));
> rused = &vq->vq_split.ring.used->ring[desc_idx];
> - sw_ring = &vq->sw_ring[desc_idx];
> - sw_ring_end = &vq->sw_ring[vq->vq_nentries];
> + sw_ring = &vq->rxq.sw_ring[desc_idx];
Ditto
Thanks,
Chenbo
> + sw_ring_end = &vq->rxq.sw_ring[vq->vq_nentries];
>
> rte_prefetch0(rused);
>
> diff --git a/drivers/net/virtio/virtqueue.c
> b/drivers/net/virtio/virtqueue.c
> index fb651a4ca3..7a84796513 100644
> --- a/drivers/net/virtio/virtqueue.c
> +++ b/drivers/net/virtio/virtqueue.c
> @@ -38,9 +38,9 @@ virtqueue_detach_unused(struct virtqueue *vq)
> continue;
> if (start > end && (idx >= start || idx < end))
> continue;
> - cookie = vq->sw_ring[idx];
> + cookie = vq->rxq.sw_ring[idx];
> if (cookie != NULL) {
> - vq->sw_ring[idx] = NULL;
> + vq->rxq.sw_ring[idx] = NULL;
> return cookie;
> }
> } else {
> @@ -100,7 +100,7 @@ virtqueue_rxvq_flush_split(struct virtqueue *vq)
> uep = &vq->vq_split.ring.used->ring[used_idx];
> if (hw->use_vec_rx) {
> desc_idx = used_idx;
> - rte_pktmbuf_free(vq->sw_ring[desc_idx]);
> + rte_pktmbuf_free(vq->rxq.sw_ring[desc_idx]);
> vq->vq_free_cnt++;
> } else if (hw->use_inorder_rx) {
> desc_idx = (uint16_t)uep->id;
> diff --git a/drivers/net/virtio/virtqueue.h
> b/drivers/net/virtio/virtqueue.h
> index d453c3ec26..d7f8ee79bb 100644
> --- a/drivers/net/virtio/virtqueue.h
> +++ b/drivers/net/virtio/virtqueue.h
> @@ -206,7 +206,6 @@ struct virtqueue {
> * or virtual address for virtio_user. */
>
> uint16_t *notify_addr;
> - struct rte_mbuf **sw_ring; /**< RX software ring. */
> struct vq_desc_extra vq_descx[];
> };
>
> --
> 2.38.1
* RE: [PATCH v1 01/21] net/virtio: move CVQ code into a dedicated file
2022-11-30 15:56 ` [PATCH v1 01/21] net/virtio: move CVQ code into a dedicated file Maxime Coquelin
@ 2023-01-30 7:50 ` Xia, Chenbo
0 siblings, 0 replies; 48+ messages in thread
From: Xia, Chenbo @ 2023-01-30 7:50 UTC (permalink / raw)
To: Coquelin, Maxime, dev, david.marchand, eperezma
> -----Original Message-----
> From: Maxime Coquelin <maxime.coquelin@redhat.com>
> Sent: Wednesday, November 30, 2022 11:56 PM
> To: dev@dpdk.org; Xia, Chenbo <chenbo.xia@intel.com>;
> david.marchand@redhat.com; eperezma@redhat.com
> Cc: Maxime Coquelin <maxime.coquelin@redhat.com>
> Subject: [PATCH v1 01/21] net/virtio: move CVQ code into a dedicated file
>
> This patch moves Virtio control queue code into a dedicated
> file, as preliminary rework to support shadow control queue
> in Virtio-user.
>
> Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
> ---
> drivers/net/virtio/meson.build | 1 +
> drivers/net/virtio/virtio_cvq.c | 230 +++++++++++++++++++++++++++++
> drivers/net/virtio/virtio_cvq.h | 126 ++++++++++++++++
> drivers/net/virtio/virtio_ethdev.c | 218 +--------------------------
> drivers/net/virtio/virtio_rxtx.h | 9 --
> drivers/net/virtio/virtqueue.h | 105 +------------
> 6 files changed, 359 insertions(+), 330 deletions(-)
> create mode 100644 drivers/net/virtio/virtio_cvq.c
> create mode 100644 drivers/net/virtio/virtio_cvq.h
>
> --
> 2.38.1
Reviewed-by: Chenbo Xia <chenbo.xia@intel.com>
* RE: [PATCH v1 02/21] net/virtio: introduce notify callback for control queue
2022-11-30 15:56 ` [PATCH v1 02/21] net/virtio: introduce notify callback for control queue Maxime Coquelin
@ 2023-01-30 7:51 ` Xia, Chenbo
0 siblings, 0 replies; 48+ messages in thread
From: Xia, Chenbo @ 2023-01-30 7:51 UTC (permalink / raw)
To: Coquelin, Maxime, dev, david.marchand, eperezma
> -----Original Message-----
> From: Maxime Coquelin <maxime.coquelin@redhat.com>
> Sent: Wednesday, November 30, 2022 11:56 PM
> To: dev@dpdk.org; Xia, Chenbo <chenbo.xia@intel.com>;
> david.marchand@redhat.com; eperezma@redhat.com
> Cc: Maxime Coquelin <maxime.coquelin@redhat.com>
> Subject: [PATCH v1 02/21] net/virtio: introduce notify callback for
> control queue
>
> This patch introduces a notification callback for the control
> virtqueue as preliminary work to add shadow control virtqueue
> support.
>
> This new callback is required so that the shadow control queue
> implemented in Virtio-user does not call the notification op
> implemented for the driver layer.
>
> Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
> ---
> drivers/net/virtio/virtio_cvq.c | 4 ++--
> drivers/net/virtio/virtio_cvq.h | 4 ++++
> drivers/net/virtio/virtio_ethdev.c | 7 +++++++
> 3 files changed, 13 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/net/virtio/virtio_cvq.c
> b/drivers/net/virtio/virtio_cvq.c
> index de4299a2a7..cd25614df8 100644
> --- a/drivers/net/virtio/virtio_cvq.c
> +++ b/drivers/net/virtio/virtio_cvq.c
> @@ -76,7 +76,7 @@ virtio_send_command_packed(struct virtnet_ctl *cvq,
> vq->hw->weak_barriers);
>
> virtio_wmb(vq->hw->weak_barriers);
> - virtqueue_notify(vq);
> + cvq->notify_queue(vq, cvq->notify_cookie);
>
> /* wait for used desc in virtqueue
> * desc_is_used has a load-acquire or rte_io_rmb inside
> @@ -155,7 +155,7 @@ virtio_send_command_split(struct virtnet_ctl *cvq,
>
> PMD_INIT_LOG(DEBUG, "vq->vq_queue_index = %d", vq->vq_queue_index);
>
> - virtqueue_notify(vq);
> + cvq->notify_queue(vq, cvq->notify_cookie);
>
> while (virtqueue_nused(vq) == 0)
> usleep(100);
> diff --git a/drivers/net/virtio/virtio_cvq.h
> b/drivers/net/virtio/virtio_cvq.h
> index 139e813ffb..224dc81422 100644
> --- a/drivers/net/virtio/virtio_cvq.h
> +++ b/drivers/net/virtio/virtio_cvq.h
> @@ -7,6 +7,8 @@
>
> #include <rte_ether.h>
>
> +struct virtqueue;
> +
> /**
> * Control the RX mode, ie. promiscuous, allmulti, etc...
> * All commands require an "out" sg entry containing a 1 byte
> @@ -110,6 +112,8 @@ struct virtnet_ctl {
> uint16_t port_id; /**< Device port identifier. */
> const struct rte_memzone *mz; /**< mem zone to populate CTL ring.
> */
> rte_spinlock_t lock; /**< spinlock for control queue.
> */
> + void (*notify_queue)(struct virtqueue *vq, void *cookie); /**<
> notify ops. */
> + void *notify_cookie; /**< cookie for notify ops */
> };
>
> #define VIRTIO_MAX_CTRL_DATA 2048
> diff --git a/drivers/net/virtio/virtio_ethdev.c
> b/drivers/net/virtio/virtio_ethdev.c
> index d553f89a0d..8db8771f4d 100644
> --- a/drivers/net/virtio/virtio_ethdev.c
> +++ b/drivers/net/virtio/virtio_ethdev.c
> @@ -253,6 +253,12 @@ virtio_init_vring(struct virtqueue *vq)
> virtqueue_disable_intr(vq);
> }
>
> +static void
> +virtio_control_queue_notify(struct virtqueue *vq, __rte_unused void
> *cookie)
> +{
> + virtqueue_notify(vq);
> +}
> +
> static int
> virtio_init_queue(struct rte_eth_dev *dev, uint16_t queue_idx)
> {
> @@ -421,6 +427,7 @@ virtio_init_queue(struct rte_eth_dev *dev, uint16_t
> queue_idx)
> memset(cvq->virtio_net_hdr_mz->addr, 0, rte_mem_page_size());
>
> hw->cvq = cvq;
> + vq->cq.notify_queue = &virtio_control_queue_notify;
> }
>
> if (hw->use_va)
> --
> 2.38.1
Reviewed-by: Chenbo Xia <chenbo.xia@intel.com>
* RE: [PATCH v1 03/21] net/virtio: virtqueue headers alloc refactoring
2022-11-30 15:56 ` [PATCH v1 03/21] net/virtio: virtqueue headers alloc refactoring Maxime Coquelin
@ 2023-01-30 7:51 ` Xia, Chenbo
0 siblings, 0 replies; 48+ messages in thread
From: Xia, Chenbo @ 2023-01-30 7:51 UTC (permalink / raw)
To: Coquelin, Maxime, dev, david.marchand, eperezma
> -----Original Message-----
> From: Maxime Coquelin <maxime.coquelin@redhat.com>
> Sent: Wednesday, November 30, 2022 11:56 PM
> To: dev@dpdk.org; Xia, Chenbo <chenbo.xia@intel.com>;
> david.marchand@redhat.com; eperezma@redhat.com
> Cc: Maxime Coquelin <maxime.coquelin@redhat.com>
> Subject: [PATCH v1 03/21] net/virtio: virtqueue headers alloc refactoring
>
> This patch refactors virtqueue initialization by moving
> its headers allocation and deallocation into dedicated
> functions.
>
> While at it, it renames the memzone metadata and address
> pointers in the virtnet_tx and virtnet_ctl structures to
> remove redundant virtio_net_ prefix.
>
> Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
> ---
> drivers/net/virtio/virtio_cvq.c | 19 ++--
> drivers/net/virtio/virtio_cvq.h | 9 +-
> drivers/net/virtio/virtio_ethdev.c | 149 ++++++++++++++++++-----------
> drivers/net/virtio/virtio_rxtx.c | 12 +--
> drivers/net/virtio/virtio_rxtx.h | 12 +--
> drivers/net/virtio/virtqueue.c | 8 +-
> drivers/net/virtio/virtqueue.h | 13 +--
> 7 files changed, 126 insertions(+), 96 deletions(-)
> --
> 2.38.1
Reviewed-by: Chenbo Xia <chenbo.xia@intel.com>
* RE: [PATCH v1 04/21] net/virtio: remove port ID info from Rx queue
2022-11-30 15:56 ` [PATCH v1 04/21] net/virtio: remove port ID info from Rx queue Maxime Coquelin
@ 2023-01-30 7:51 ` Xia, Chenbo
0 siblings, 0 replies; 48+ messages in thread
From: Xia, Chenbo @ 2023-01-30 7:51 UTC (permalink / raw)
To: Coquelin, Maxime, dev, david.marchand, eperezma
> -----Original Message-----
> From: Maxime Coquelin <maxime.coquelin@redhat.com>
> Sent: Wednesday, November 30, 2022 11:56 PM
> To: dev@dpdk.org; Xia, Chenbo <chenbo.xia@intel.com>;
> david.marchand@redhat.com; eperezma@redhat.com
> Cc: Maxime Coquelin <maxime.coquelin@redhat.com>
> Subject: [PATCH v1 04/21] net/virtio: remove port ID info from Rx queue
>
> The port ID information is duplicated in several places.
> This patch removes it from the virtnet_rx struct, as it can
> be found in the virtio_hw struct.
>
> Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
> ---
> drivers/net/virtio/virtio_ethdev.c | 1 -
> drivers/net/virtio/virtio_rxtx.c | 25 ++++++++++---------------
> drivers/net/virtio/virtio_rxtx.h | 1 -
> drivers/net/virtio/virtio_rxtx_packed.c | 3 +--
> drivers/net/virtio/virtio_rxtx_simple.c | 3 ++-
> drivers/net/virtio/virtio_rxtx_simple.h | 5 +++--
> 6 files changed, 16 insertions(+), 22 deletions(-)
>
> diff --git a/drivers/net/virtio/virtio_ethdev.c
> b/drivers/net/virtio/virtio_ethdev.c
> index cead5f0884..1c68e5a283 100644
> --- a/drivers/net/virtio/virtio_ethdev.c
> +++ b/drivers/net/virtio/virtio_ethdev.c
> @@ -462,7 +462,6 @@ virtio_init_queue(struct rte_eth_dev *dev, uint16_t
> queue_idx)
>
> vq->sw_ring = sw_ring;
> rxvq = &vq->rxq;
> - rxvq->port_id = dev->data->port_id;
> rxvq->mz = mz;
> rxvq->fake_mbuf = fake_mbuf;
> } else if (queue_type == VTNET_TQ) {
> diff --git a/drivers/net/virtio/virtio_rxtx.c
> b/drivers/net/virtio/virtio_rxtx.c
> index bd95e8ceb5..45c04aa3f8 100644
> --- a/drivers/net/virtio/virtio_rxtx.c
> +++ b/drivers/net/virtio/virtio_rxtx.c
> @@ -1024,7 +1024,7 @@ virtio_recv_pkts(void *rx_queue, struct rte_mbuf
> **rx_pkts, uint16_t nb_pkts)
> continue;
> }
>
> - rxm->port = rxvq->port_id;
> + rxm->port = hw->port_id;
> rxm->data_off = RTE_PKTMBUF_HEADROOM;
> rxm->ol_flags = 0;
> rxm->vlan_tci = 0;
> @@ -1066,8 +1066,7 @@ virtio_recv_pkts(void *rx_queue, struct rte_mbuf
> **rx_pkts, uint16_t nb_pkts)
> }
> nb_enqueued += free_cnt;
> } else {
> - struct rte_eth_dev *dev =
> - &rte_eth_devices[rxvq->port_id];
> + struct rte_eth_dev *dev = &rte_eth_devices[hw->port_id];
> dev->data->rx_mbuf_alloc_failed += free_cnt;
> }
> }
> @@ -1127,7 +1126,7 @@ virtio_recv_pkts_packed(void *rx_queue, struct
> rte_mbuf **rx_pkts,
> continue;
> }
>
> - rxm->port = rxvq->port_id;
> + rxm->port = hw->port_id;
> rxm->data_off = RTE_PKTMBUF_HEADROOM;
> rxm->ol_flags = 0;
> rxm->vlan_tci = 0;
> @@ -1169,8 +1168,7 @@ virtio_recv_pkts_packed(void *rx_queue, struct
> rte_mbuf **rx_pkts,
> }
> nb_enqueued += free_cnt;
> } else {
> - struct rte_eth_dev *dev =
> - &rte_eth_devices[rxvq->port_id];
> + struct rte_eth_dev *dev = &rte_eth_devices[hw->port_id];
> dev->data->rx_mbuf_alloc_failed += free_cnt;
> }
> }
> @@ -1258,7 +1256,7 @@ virtio_recv_pkts_inorder(void *rx_queue,
> rxm->pkt_len = (uint32_t)(len[i] - hdr_size);
> rxm->data_len = (uint16_t)(len[i] - hdr_size);
>
> - rxm->port = rxvq->port_id;
> + rxm->port = hw->port_id;
>
> rx_pkts[nb_rx] = rxm;
> prev = rxm;
> @@ -1352,8 +1350,7 @@ virtio_recv_pkts_inorder(void *rx_queue,
> }
> nb_enqueued += free_cnt;
> } else {
> - struct rte_eth_dev *dev =
> - &rte_eth_devices[rxvq->port_id];
> + struct rte_eth_dev *dev = &rte_eth_devices[hw->port_id];
> dev->data->rx_mbuf_alloc_failed += free_cnt;
> }
> }
> @@ -1437,7 +1434,7 @@ virtio_recv_mergeable_pkts(void *rx_queue,
> rxm->pkt_len = (uint32_t)(len[i] - hdr_size);
> rxm->data_len = (uint16_t)(len[i] - hdr_size);
>
> - rxm->port = rxvq->port_id;
> + rxm->port = hw->port_id;
>
> rx_pkts[nb_rx] = rxm;
> prev = rxm;
> @@ -1530,8 +1527,7 @@ virtio_recv_mergeable_pkts(void *rx_queue,
> }
> nb_enqueued += free_cnt;
> } else {
> - struct rte_eth_dev *dev =
> - &rte_eth_devices[rxvq->port_id];
> + struct rte_eth_dev *dev = &rte_eth_devices[hw->port_id];
> dev->data->rx_mbuf_alloc_failed += free_cnt;
> }
> }
> @@ -1610,7 +1606,7 @@ virtio_recv_mergeable_pkts_packed(void *rx_queue,
> rxm->pkt_len = (uint32_t)(len[i] - hdr_size);
> rxm->data_len = (uint16_t)(len[i] - hdr_size);
>
> - rxm->port = rxvq->port_id;
> + rxm->port = hw->port_id;
> rx_pkts[nb_rx] = rxm;
> prev = rxm;
>
> @@ -1699,8 +1695,7 @@ virtio_recv_mergeable_pkts_packed(void *rx_queue,
> }
> nb_enqueued += free_cnt;
> } else {
> - struct rte_eth_dev *dev =
> - &rte_eth_devices[rxvq->port_id];
> + struct rte_eth_dev *dev = &rte_eth_devices[hw->port_id];
> dev->data->rx_mbuf_alloc_failed += free_cnt;
> }
> }
> diff --git a/drivers/net/virtio/virtio_rxtx.h
> b/drivers/net/virtio/virtio_rxtx.h
> index 226c722d64..97de9eb0a3 100644
> --- a/drivers/net/virtio/virtio_rxtx.h
> +++ b/drivers/net/virtio/virtio_rxtx.h
> @@ -24,7 +24,6 @@ struct virtnet_rx {
> struct rte_mempool *mpool; /**< mempool for mbuf allocation */
>
> uint16_t queue_id; /**< DPDK queue index. */
> - uint16_t port_id; /**< Device port identifier. */
>
> /* Statistics */
> struct virtnet_stats stats;
> diff --git a/drivers/net/virtio/virtio_rxtx_packed.c
> b/drivers/net/virtio/virtio_rxtx_packed.c
> index 45cf39df22..5f7d4903bc 100644
> --- a/drivers/net/virtio/virtio_rxtx_packed.c
> +++ b/drivers/net/virtio/virtio_rxtx_packed.c
> @@ -124,8 +124,7 @@ virtio_recv_pkts_packed_vec(void *rx_queue,
> free_cnt);
> nb_enqueued += free_cnt;
> } else {
> - struct rte_eth_dev *dev =
> - &rte_eth_devices[rxvq->port_id];
> + struct rte_eth_dev *dev = &rte_eth_devices[hw->port_id];
> dev->data->rx_mbuf_alloc_failed += free_cnt;
> }
> }
> diff --git a/drivers/net/virtio/virtio_rxtx_simple.c
> b/drivers/net/virtio/virtio_rxtx_simple.c
> index f248869a8f..438256970d 100644
> --- a/drivers/net/virtio/virtio_rxtx_simple.c
> +++ b/drivers/net/virtio/virtio_rxtx_simple.c
> @@ -30,12 +30,13 @@
> int __rte_cold
> virtio_rxq_vec_setup(struct virtnet_rx *rxq)
> {
> + struct virtqueue *vq = virtnet_rxq_to_vq(rxq);
> uintptr_t p;
> struct rte_mbuf mb_def = { .buf_addr = 0 }; /* zeroed mbuf */
>
> mb_def.nb_segs = 1;
> mb_def.data_off = RTE_PKTMBUF_HEADROOM;
> - mb_def.port = rxq->port_id;
> + mb_def.port = vq->hw->port_id;
> rte_mbuf_refcnt_set(&mb_def, 1);
>
> /* prevent compiler reordering: rearm_data covers previous fields */
> diff --git a/drivers/net/virtio/virtio_rxtx_simple.h
> b/drivers/net/virtio/virtio_rxtx_simple.h
> index d8f96e0434..8e235f4dbc 100644
> --- a/drivers/net/virtio/virtio_rxtx_simple.h
> +++ b/drivers/net/virtio/virtio_rxtx_simple.h
> @@ -32,8 +32,9 @@ virtio_rxq_rearm_vec(struct virtnet_rx *rxvq)
> ret = rte_mempool_get_bulk(rxvq->mpool, (void **)sw_ring,
> RTE_VIRTIO_VPMD_RX_REARM_THRESH);
> if (unlikely(ret)) {
> - rte_eth_devices[rxvq->port_id].data->rx_mbuf_alloc_failed +=
> - RTE_VIRTIO_VPMD_RX_REARM_THRESH;
> + struct rte_eth_dev *dev = &rte_eth_devices[vq->hw->port_id];
> +
> + dev->data->rx_mbuf_alloc_failed +=
> RTE_VIRTIO_VPMD_RX_REARM_THRESH;
> return;
> }
>
> --
> 2.38.1
Reviewed-by: Chenbo Xia <chenbo.xia@intel.com>
* RE: [PATCH v1 05/21] net/virtio: remove unused fields in Tx queue struct
2022-11-30 15:56 ` [PATCH v1 05/21] net/virtio: remove unused fields in Tx queue struct Maxime Coquelin
@ 2023-01-30 7:51 ` Xia, Chenbo
0 siblings, 0 replies; 48+ messages in thread
From: Xia, Chenbo @ 2023-01-30 7:51 UTC (permalink / raw)
To: Coquelin, Maxime, dev, david.marchand, eperezma
> -----Original Message-----
> From: Maxime Coquelin <maxime.coquelin@redhat.com>
> Sent: Wednesday, November 30, 2022 11:56 PM
> To: dev@dpdk.org; Xia, Chenbo <chenbo.xia@intel.com>;
> david.marchand@redhat.com; eperezma@redhat.com
> Cc: Maxime Coquelin <maxime.coquelin@redhat.com>
> Subject: [PATCH v1 05/21] net/virtio: remove unused fields in Tx queue
> struct
>
> The port and queue IDs are not used in the virtnet_tx struct;
> this patch removes them.
>
> Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
> ---
> drivers/net/virtio/virtio_ethdev.c | 1 -
> drivers/net/virtio/virtio_rxtx.c | 1 -
> drivers/net/virtio/virtio_rxtx.h | 3 ---
> 3 files changed, 5 deletions(-)
>
> diff --git a/drivers/net/virtio/virtio_ethdev.c
> b/drivers/net/virtio/virtio_ethdev.c
> index 1c68e5a283..a581fae408 100644
> --- a/drivers/net/virtio/virtio_ethdev.c
> +++ b/drivers/net/virtio/virtio_ethdev.c
> @@ -466,7 +466,6 @@ virtio_init_queue(struct rte_eth_dev *dev, uint16_t
> queue_idx)
> rxvq->fake_mbuf = fake_mbuf;
> } else if (queue_type == VTNET_TQ) {
> txvq = &vq->txq;
> - txvq->port_id = dev->data->port_id;
> txvq->mz = mz;
> } else if (queue_type == VTNET_CQ) {
> cvq = &vq->cq;
> diff --git a/drivers/net/virtio/virtio_rxtx.c
> b/drivers/net/virtio/virtio_rxtx.c
> index 45c04aa3f8..304403d46c 100644
> --- a/drivers/net/virtio/virtio_rxtx.c
> +++ b/drivers/net/virtio/virtio_rxtx.c
> @@ -831,7 +831,6 @@ virtio_dev_tx_queue_setup(struct rte_eth_dev *dev,
> vq->vq_free_cnt = RTE_MIN(vq->vq_free_cnt, nb_desc);
>
> txvq = &vq->txq;
> - txvq->queue_id = queue_idx;
>
> tx_free_thresh = tx_conf->tx_free_thresh;
> if (tx_free_thresh == 0)
> diff --git a/drivers/net/virtio/virtio_rxtx.h
> b/drivers/net/virtio/virtio_rxtx.h
> index 97de9eb0a3..9bbcf32f66 100644
> --- a/drivers/net/virtio/virtio_rxtx.h
> +++ b/drivers/net/virtio/virtio_rxtx.h
> @@ -35,9 +35,6 @@ struct virtnet_tx {
> const struct rte_memzone *hdr_mz; /**< memzone to populate hdr. */
> rte_iova_t hdr_mem; /**< hdr for each xmit packet */
>
> - uint16_t queue_id; /**< DPDK queue index. */
> - uint16_t port_id; /**< Device port identifier. */
> -
> struct virtnet_stats stats; /* Statistics */
>
> const struct rte_memzone *mz; /**< mem zone to populate TX ring.
> */
> --
> 2.38.1
Reviewed-by: Chenbo Xia <chenbo.xia@intel.com>
* RE: [PATCH v1 06/21] net/virtio: remove unused queue ID field in Rx queue
2022-11-30 15:56 ` [PATCH v1 06/21] net/virtio: remove unused queue ID field in Rx queue Maxime Coquelin
@ 2023-01-30 7:52 ` Xia, Chenbo
0 siblings, 0 replies; 48+ messages in thread
From: Xia, Chenbo @ 2023-01-30 7:52 UTC (permalink / raw)
To: Coquelin, Maxime, dev, david.marchand, eperezma
> -----Original Message-----
> From: Maxime Coquelin <maxime.coquelin@redhat.com>
> Sent: Wednesday, November 30, 2022 11:56 PM
> To: dev@dpdk.org; Xia, Chenbo <chenbo.xia@intel.com>;
> david.marchand@redhat.com; eperezma@redhat.com
> Cc: Maxime Coquelin <maxime.coquelin@redhat.com>
> Subject: [PATCH v1 06/21] net/virtio: remove unused queue ID field in Rx
> queue
>
> This patch removes the queue ID field in virtnet_rx struct.
>
> Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
> ---
> drivers/net/virtio/virtio_rxtx.c | 1 -
> drivers/net/virtio/virtio_rxtx.h | 2 --
> 2 files changed, 3 deletions(-)
>
> diff --git a/drivers/net/virtio/virtio_rxtx.c
> b/drivers/net/virtio/virtio_rxtx.c
> index 304403d46c..4f69b97f41 100644
> --- a/drivers/net/virtio/virtio_rxtx.c
> +++ b/drivers/net/virtio/virtio_rxtx.c
> @@ -703,7 +703,6 @@ virtio_dev_rx_queue_setup(struct rte_eth_dev *dev,
> vq->vq_free_cnt = RTE_MIN(vq->vq_free_cnt, nb_desc);
>
> rxvq = &vq->rxq;
> - rxvq->queue_id = queue_idx;
> rxvq->mpool = mp;
> dev->data->rx_queues[queue_idx] = rxvq;
>
> diff --git a/drivers/net/virtio/virtio_rxtx.h
> b/drivers/net/virtio/virtio_rxtx.h
> index 9bbcf32f66..a5fe3ea95c 100644
> --- a/drivers/net/virtio/virtio_rxtx.h
> +++ b/drivers/net/virtio/virtio_rxtx.h
> @@ -23,8 +23,6 @@ struct virtnet_rx {
> uint64_t mbuf_initializer; /**< value to init mbufs. */
> struct rte_mempool *mpool; /**< mempool for mbuf allocation */
>
> - uint16_t queue_id; /**< DPDK queue index. */
> -
> /* Statistics */
> struct virtnet_stats stats;
>
> --
> 2.38.1
Reviewed-by: Chenbo Xia <chenbo.xia@intel.com>
* RE: [PATCH v1 07/21] net/virtio: remove unused Port ID in control queue
2022-11-30 15:56 ` [PATCH v1 07/21] net/virtio: remove unused Port ID in control queue Maxime Coquelin
@ 2023-01-30 7:52 ` Xia, Chenbo
0 siblings, 0 replies; 48+ messages in thread
From: Xia, Chenbo @ 2023-01-30 7:52 UTC (permalink / raw)
To: Coquelin, Maxime, dev, david.marchand, eperezma
> -----Original Message-----
> From: Maxime Coquelin <maxime.coquelin@redhat.com>
> Sent: Wednesday, November 30, 2022 11:56 PM
> To: dev@dpdk.org; Xia, Chenbo <chenbo.xia@intel.com>;
> david.marchand@redhat.com; eperezma@redhat.com
> Cc: Maxime Coquelin <maxime.coquelin@redhat.com>
> Subject: [PATCH v1 07/21] net/virtio: remove unused Port ID in control
> queue
>
> This patch removes the unused port ID information from
> virtnet_ctl struct.
>
> Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
> ---
> drivers/net/virtio/virtio_cvq.h | 1 -
> 1 file changed, 1 deletion(-)
>
> diff --git a/drivers/net/virtio/virtio_cvq.h
> b/drivers/net/virtio/virtio_cvq.h
> index 226561e6b8..0ff326b063 100644
> --- a/drivers/net/virtio/virtio_cvq.h
> +++ b/drivers/net/virtio/virtio_cvq.h
> @@ -108,7 +108,6 @@ typedef uint8_t virtio_net_ctrl_ack;
> struct virtnet_ctl {
> const struct rte_memzone *hdr_mz; /**< memzone to populate hdr. */
> rte_iova_t hdr_mem; /**< hdr for each xmit packet */
> - uint16_t port_id; /**< Device port identifier. */
> const struct rte_memzone *mz; /**< mem zone to populate CTL ring.
> */
> rte_spinlock_t lock; /**< spinlock for control queue.
> */
> void (*notify_queue)(struct virtqueue *vq, void *cookie); /**<
> notify ops. */
> --
> 2.38.1
Reviewed-by: Chenbo Xia <chenbo.xia@intel.com>
* RE: [PATCH v1 08/21] net/virtio: move vring memzone to virtqueue struct
2022-11-30 15:56 ` [PATCH v1 08/21] net/virtio: move vring memzone to virtqueue struct Maxime Coquelin
@ 2023-01-30 7:52 ` Xia, Chenbo
0 siblings, 0 replies; 48+ messages in thread
From: Xia, Chenbo @ 2023-01-30 7:52 UTC (permalink / raw)
To: Coquelin, Maxime, dev, david.marchand, eperezma
> -----Original Message-----
> From: Maxime Coquelin <maxime.coquelin@redhat.com>
> Sent: Wednesday, November 30, 2022 11:56 PM
> To: dev@dpdk.org; Xia, Chenbo <chenbo.xia@intel.com>;
> david.marchand@redhat.com; eperezma@redhat.com
> Cc: Maxime Coquelin <maxime.coquelin@redhat.com>
> Subject: [PATCH v1 08/21] net/virtio: move vring memzone to virtqueue
> struct
>
> Whatever their type (Rx, Tx or Ctl), all virtqueues
> require a memzone for the vrings. This patch moves the
> memzone pointer to the virtqueue struct, simplifying the code.
>
> Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
> ---
> drivers/net/virtio/virtio_cvq.h | 1 -
> drivers/net/virtio/virtio_ethdev.c | 11 ++---------
> drivers/net/virtio/virtio_rxtx.h | 4 ----
> drivers/net/virtio/virtqueue.c | 6 ++----
> drivers/net/virtio/virtqueue.h | 1 +
> 5 files changed, 5 insertions(+), 18 deletions(-)
>
> diff --git a/drivers/net/virtio/virtio_cvq.h
> b/drivers/net/virtio/virtio_cvq.h
> index 0ff326b063..70739ae04b 100644
> --- a/drivers/net/virtio/virtio_cvq.h
> +++ b/drivers/net/virtio/virtio_cvq.h
> @@ -108,7 +108,6 @@ typedef uint8_t virtio_net_ctrl_ack;
> struct virtnet_ctl {
> const struct rte_memzone *hdr_mz; /**< memzone to populate hdr. */
> rte_iova_t hdr_mem; /**< hdr for each xmit packet */
> - const struct rte_memzone *mz; /**< mem zone to populate CTL ring.
> */
> rte_spinlock_t lock; /**< spinlock for control queue.
> */
> void (*notify_queue)(struct virtqueue *vq, void *cookie); /**<
> notify ops. */
> void *notify_cookie; /**< cookie for notify ops */
> diff --git a/drivers/net/virtio/virtio_ethdev.c
> b/drivers/net/virtio/virtio_ethdev.c
> index a581fae408..b546916a9f 100644
> --- a/drivers/net/virtio/virtio_ethdev.c
> +++ b/drivers/net/virtio/virtio_ethdev.c
> @@ -423,6 +423,7 @@ virtio_init_queue(struct rte_eth_dev *dev, uint16_t
> queue_idx)
>
> memset(mz->addr, 0, mz->len);
>
> + vq->mz = mz;
> if (hw->use_va)
> vq->vq_ring_mem = (uintptr_t)mz->addr;
> else
> @@ -462,14 +463,11 @@ virtio_init_queue(struct rte_eth_dev *dev, uint16_t
> queue_idx)
>
> vq->sw_ring = sw_ring;
> rxvq = &vq->rxq;
> - rxvq->mz = mz;
> rxvq->fake_mbuf = fake_mbuf;
> } else if (queue_type == VTNET_TQ) {
> txvq = &vq->txq;
> - txvq->mz = mz;
> } else if (queue_type == VTNET_CQ) {
> cvq = &vq->cq;
> - cvq->mz = mz;
> hw->cvq = cvq;
> vq->cq.notify_queue = &virtio_control_queue_notify;
> }
> @@ -550,15 +548,10 @@ virtio_free_queues(struct virtio_hw *hw)
> if (queue_type == VTNET_RQ) {
> rte_free(vq->rxq.fake_mbuf);
> rte_free(vq->sw_ring);
> - rte_memzone_free(vq->rxq.mz);
> - } else if (queue_type == VTNET_TQ) {
> - rte_memzone_free(vq->txq.mz);
> - } else {
> - rte_memzone_free(vq->cq.mz);
> }
>
> virtio_free_queue_headers(vq);
> -
> + rte_memzone_free(vq->mz);
> rte_free(vq);
> hw->vqs[i] = NULL;
> }
> diff --git a/drivers/net/virtio/virtio_rxtx.h
> b/drivers/net/virtio/virtio_rxtx.h
> index a5fe3ea95c..57af630110 100644
> --- a/drivers/net/virtio/virtio_rxtx.h
> +++ b/drivers/net/virtio/virtio_rxtx.h
> @@ -25,8 +25,6 @@ struct virtnet_rx {
>
> /* Statistics */
> struct virtnet_stats stats;
> -
> - const struct rte_memzone *mz; /**< mem zone to populate RX ring. */
> };
>
> struct virtnet_tx {
> @@ -34,8 +32,6 @@ struct virtnet_tx {
> rte_iova_t hdr_mem; /**< hdr for each xmit packet */
>
> struct virtnet_stats stats; /* Statistics */
> -
> - const struct rte_memzone *mz; /**< mem zone to populate TX ring.
> */
> };
>
> int virtio_rxq_vec_setup(struct virtnet_rx *rxvq);
> diff --git a/drivers/net/virtio/virtqueue.c
> b/drivers/net/virtio/virtqueue.c
> index 3b174a5923..41e3529546 100644
> --- a/drivers/net/virtio/virtqueue.c
> +++ b/drivers/net/virtio/virtqueue.c
> @@ -148,7 +148,6 @@ virtqueue_rxvq_reset_packed(struct virtqueue *vq)
> {
> int size = vq->vq_nentries;
> struct vq_desc_extra *dxp;
> - struct virtnet_rx *rxvq;
> uint16_t desc_idx;
>
> vq->vq_used_cons_idx = 0;
> @@ -162,8 +161,7 @@ virtqueue_rxvq_reset_packed(struct virtqueue *vq)
> vq->vq_packed.event_flags_shadow = 0;
> vq->vq_packed.cached_flags |= VRING_DESC_F_WRITE;
>
> - rxvq = &vq->rxq;
> - memset(rxvq->mz->addr, 0, rxvq->mz->len);
> + memset(vq->mz->addr, 0, vq->mz->len);
>
> for (desc_idx = 0; desc_idx < vq->vq_nentries; desc_idx++) {
> dxp = &vq->vq_descx[desc_idx];
> @@ -201,7 +199,7 @@ virtqueue_txvq_reset_packed(struct virtqueue *vq)
>
> txvq = &vq->txq;
> txr = txvq->hdr_mz->addr;
> - memset(txvq->mz->addr, 0, txvq->mz->len);
> + memset(vq->mz->addr, 0, vq->mz->len);
> memset(txvq->hdr_mz->addr, 0, txvq->hdr_mz->len);
>
> for (desc_idx = 0; desc_idx < vq->vq_nentries; desc_idx++) {
> diff --git a/drivers/net/virtio/virtqueue.h
> b/drivers/net/virtio/virtqueue.h
> index f5058f362c..8b7bfae643 100644
> --- a/drivers/net/virtio/virtqueue.h
> +++ b/drivers/net/virtio/virtqueue.h
> @@ -201,6 +201,7 @@ struct virtqueue {
> struct virtnet_ctl cq;
> };
>
> + const struct rte_memzone *mz; /**< mem zone to populate ring. */
> rte_iova_t vq_ring_mem; /**< physical address of vring,
> * or virtual address for virtio_user. */
>
> --
> 2.38.1
Reviewed-by: Chenbo Xia <chenbo.xia@intel.com>
* RE: [PATCH v1 09/21] net/virtio: refactor indirect desc headers init
2022-11-30 15:56 ` [PATCH v1 09/21] net/virtio: refactor indirect desc headers init Maxime Coquelin
@ 2023-01-30 7:52 ` Xia, Chenbo
0 siblings, 0 replies; 48+ messages in thread
From: Xia, Chenbo @ 2023-01-30 7:52 UTC (permalink / raw)
To: Coquelin, Maxime, dev, david.marchand, eperezma
> -----Original Message-----
> From: Maxime Coquelin <maxime.coquelin@redhat.com>
> Sent: Wednesday, November 30, 2022 11:56 PM
> To: dev@dpdk.org; Xia, Chenbo <chenbo.xia@intel.com>;
> david.marchand@redhat.com; eperezma@redhat.com
> Cc: Maxime Coquelin <maxime.coquelin@redhat.com>
> Subject: [PATCH v1 09/21] net/virtio: refactor indirect desc headers init
>
> This patch refactors the indirect descriptor headers
> initialization into a dedicated function, and uses it
> in both the queue init and reset paths.
>
> Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
> ---
> drivers/net/virtio/virtio_ethdev.c | 30 +------------
> drivers/net/virtio/virtqueue.c | 68 ++++++++++++++++++++++--------
> drivers/net/virtio/virtqueue.h | 2 +
> 3 files changed, 54 insertions(+), 46 deletions(-)
>
> diff --git a/drivers/net/virtio/virtio_ethdev.c
> b/drivers/net/virtio/virtio_ethdev.c
> index b546916a9f..8b17b450ec 100644
> --- a/drivers/net/virtio/virtio_ethdev.c
> +++ b/drivers/net/virtio/virtio_ethdev.c
> @@ -347,7 +347,6 @@ virtio_init_queue(struct rte_eth_dev *dev, uint16_t
> queue_idx)
> unsigned int vq_size, size;
> struct virtio_hw *hw = dev->data->dev_private;
> struct virtnet_rx *rxvq = NULL;
> - struct virtnet_tx *txvq = NULL;
> struct virtnet_ctl *cvq = NULL;
> struct virtqueue *vq;
> void *sw_ring = NULL;
> @@ -465,7 +464,7 @@ virtio_init_queue(struct rte_eth_dev *dev, uint16_t
> queue_idx)
> rxvq = &vq->rxq;
> rxvq->fake_mbuf = fake_mbuf;
> } else if (queue_type == VTNET_TQ) {
> - txvq = &vq->txq;
> + virtqueue_txq_indirect_headers_init(vq);
> } else if (queue_type == VTNET_CQ) {
> cvq = &vq->cq;
> hw->cvq = cvq;
> @@ -477,33 +476,6 @@ virtio_init_queue(struct rte_eth_dev *dev, uint16_t
> queue_idx)
> else
> vq->mbuf_addr_offset = offsetof(struct rte_mbuf, buf_iova);
>
> - if (queue_type == VTNET_TQ) {
> - struct virtio_tx_region *txr;
> - unsigned int i;
> -
> - txr = txvq->hdr_mz->addr;
> - for (i = 0; i < vq_size; i++) {
> - /* first indirect descriptor is always the tx header */
> - if (!virtio_with_packed_queue(hw)) {
> - struct vring_desc *start_dp = txr[i].tx_indir;
> - vring_desc_init_split(start_dp,
> - RTE_DIM(txr[i].tx_indir));
> - start_dp->addr = txvq->hdr_mem + i * sizeof(*txr)
> - + offsetof(struct virtio_tx_region, tx_hdr);
> - start_dp->len = hw->vtnet_hdr_size;
> - start_dp->flags = VRING_DESC_F_NEXT;
> - } else {
> - struct vring_packed_desc *start_dp =
> - txr[i].tx_packed_indir;
> - vring_desc_init_indirect_packed(start_dp,
> - RTE_DIM(txr[i].tx_packed_indir));
> - start_dp->addr = txvq->hdr_mem + i * sizeof(*txr)
> - + offsetof(struct virtio_tx_region, tx_hdr);
> - start_dp->len = hw->vtnet_hdr_size;
> - }
> - }
> - }
> -
> if (VIRTIO_OPS(hw)->setup_queue(hw, vq) < 0) {
> PMD_INIT_LOG(ERR, "setup_queue failed");
> ret = -EINVAL;
> diff --git a/drivers/net/virtio/virtqueue.c
> b/drivers/net/virtio/virtqueue.c
> index 41e3529546..fb651a4ca3 100644
> --- a/drivers/net/virtio/virtqueue.c
> +++ b/drivers/net/virtio/virtqueue.c
> @@ -143,6 +143,54 @@ virtqueue_rxvq_flush(struct virtqueue *vq)
> virtqueue_rxvq_flush_split(vq);
> }
>
> +static void
> +virtqueue_txq_indirect_header_init_packed(struct virtqueue *vq, uint32_t
> idx)
> +{
> + struct virtio_tx_region *txr;
> + struct vring_packed_desc *desc;
> + rte_iova_t hdr_mem;
> +
> + txr = vq->txq.hdr_mz->addr;
> + hdr_mem = vq->txq.hdr_mem;
> + desc = txr[idx].tx_packed_indir;
> +
> + vring_desc_init_indirect_packed(desc,
> RTE_DIM(txr[idx].tx_packed_indir));
> + desc->addr = hdr_mem + idx * sizeof(*txr) + offsetof(struct
> virtio_tx_region, tx_hdr);
> + desc->len = vq->hw->vtnet_hdr_size;
> +}
> +
> +static void
> +virtqueue_txq_indirect_header_init_split(struct virtqueue *vq, uint32_t
> idx)
> +{
> + struct virtio_tx_region *txr;
> + struct vring_desc *desc;
> + rte_iova_t hdr_mem;
> +
> + txr = vq->txq.hdr_mz->addr;
> + hdr_mem = vq->txq.hdr_mem;
> + desc = txr[idx].tx_indir;
> +
> + vring_desc_init_split(desc, RTE_DIM(txr[idx].tx_indir));
> + desc->addr = hdr_mem + idx * sizeof(*txr) + offsetof(struct
> virtio_tx_region, tx_hdr);
> + desc->len = vq->hw->vtnet_hdr_size;
> + desc->flags = VRING_DESC_F_NEXT;
> +}
> +
> +void
> +virtqueue_txq_indirect_headers_init(struct virtqueue *vq)
> +{
> + uint32_t i;
> +
> + if (!virtio_with_feature(vq->hw, VIRTIO_RING_F_INDIRECT_DESC))
> + return;
> +
> + for (i = 0; i < vq->vq_nentries; i++)
> + if (virtio_with_packed_queue(vq->hw))
> + virtqueue_txq_indirect_header_init_packed(vq, i);
> + else
> + virtqueue_txq_indirect_header_init_split(vq, i);
> +}
> +
> int
> virtqueue_rxvq_reset_packed(struct virtqueue *vq)
> {
> @@ -182,10 +230,7 @@ virtqueue_txvq_reset_packed(struct virtqueue *vq)
> {
> int size = vq->vq_nentries;
> struct vq_desc_extra *dxp;
> - struct virtnet_tx *txvq;
> uint16_t desc_idx;
> - struct virtio_tx_region *txr;
> - struct vring_packed_desc *start_dp;
>
> vq->vq_used_cons_idx = 0;
> vq->vq_desc_head_idx = 0;
> @@ -197,10 +242,8 @@ virtqueue_txvq_reset_packed(struct virtqueue *vq)
> vq->vq_packed.cached_flags = VRING_PACKED_DESC_F_AVAIL;
> vq->vq_packed.event_flags_shadow = 0;
>
> - txvq = &vq->txq;
> - txr = txvq->hdr_mz->addr;
> memset(vq->mz->addr, 0, vq->mz->len);
> - memset(txvq->hdr_mz->addr, 0, txvq->hdr_mz->len);
> + memset(vq->txq.hdr_mz->addr, 0, vq->txq.hdr_mz->len);
>
> for (desc_idx = 0; desc_idx < vq->vq_nentries; desc_idx++) {
> dxp = &vq->vq_descx[desc_idx];
> @@ -208,20 +251,11 @@ virtqueue_txvq_reset_packed(struct virtqueue *vq)
> rte_pktmbuf_free(dxp->cookie);
> dxp->cookie = NULL;
> }
> -
> - if (virtio_with_feature(vq->hw, VIRTIO_RING_F_INDIRECT_DESC))
> {
> - /* first indirect descriptor is always the tx header */
> - start_dp = txr[desc_idx].tx_packed_indir;
> - vring_desc_init_indirect_packed(start_dp,
> -
> RTE_DIM(txr[desc_idx].tx_packed_indir));
> - start_dp->addr = txvq->hdr_mem + desc_idx * sizeof(*txr)
> - + offsetof(struct virtio_tx_region, tx_hdr);
> - start_dp->len = vq->hw->vtnet_hdr_size;
> - }
> }
>
> + virtqueue_txq_indirect_headers_init(vq);
> vring_desc_init_packed(vq, size);
> -
> virtqueue_disable_intr(vq);
> +
> return 0;
> }
> diff --git a/drivers/net/virtio/virtqueue.h
> b/drivers/net/virtio/virtqueue.h
> index 8b7bfae643..d453c3ec26 100644
> --- a/drivers/net/virtio/virtqueue.h
> +++ b/drivers/net/virtio/virtqueue.h
> @@ -384,6 +384,8 @@ int virtqueue_rxvq_reset_packed(struct virtqueue *vq);
>
> int virtqueue_txvq_reset_packed(struct virtqueue *vq);
>
> +void virtqueue_txq_indirect_headers_init(struct virtqueue *vq);
> +
> static inline int
> virtqueue_full(const struct virtqueue *vq)
> {
> --
> 2.38.1
Reviewed-by: Chenbo Xia <chenbo.xia@intel.com>
* RE: [PATCH v1 11/21] net/virtio: extract virtqueue init from virtio queue init
2022-11-30 15:56 ` [PATCH v1 11/21] net/virtio: extract virtqueue init from virtio queue init Maxime Coquelin
@ 2023-01-30 7:53 ` Xia, Chenbo
0 siblings, 0 replies; 48+ messages in thread
From: Xia, Chenbo @ 2023-01-30 7:53 UTC (permalink / raw)
To: Coquelin, Maxime, dev, david.marchand, eperezma
> -----Original Message-----
> From: Maxime Coquelin <maxime.coquelin@redhat.com>
> Sent: Wednesday, November 30, 2022 11:56 PM
> To: dev@dpdk.org; Xia, Chenbo <chenbo.xia@intel.com>;
> david.marchand@redhat.com; eperezma@redhat.com
> Cc: Maxime Coquelin <maxime.coquelin@redhat.com>
> Subject: [PATCH v1 11/21] net/virtio: extract virtqueue init from virtio
> queue init
>
> This patch extracts the virtqueue initialization out of
> the Virtio ethdev queue initialization, as preliminary
> work to provide a way for Virtio-user to allocate its
> shadow control virtqueue.
>
> Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
> ---
> drivers/net/virtio/virtio_ethdev.c | 261 ++--------------------------
> drivers/net/virtio/virtqueue.c | 266 +++++++++++++++++++++++++++++
> drivers/net/virtio/virtqueue.h | 5 +
> 3 files changed, 282 insertions(+), 250 deletions(-)
>
> --
> 2.38.1
Reviewed-by: Chenbo Xia <chenbo.xia@intel.com>
^ permalink raw reply [flat|nested] 48+ messages in thread
* RE: [PATCH v1 21/21] net/virtio-user: remove max queues limitation
2022-11-30 15:56 ` [PATCH v1 21/21] net/virtio-user: remove max queues limitation Maxime Coquelin
@ 2023-01-31 5:19 ` Xia, Chenbo
2023-02-07 14:14 ` Maxime Coquelin
0 siblings, 1 reply; 48+ messages in thread
From: Xia, Chenbo @ 2023-01-31 5:19 UTC (permalink / raw)
To: Coquelin, Maxime, dev, david.marchand, eperezma
Hi Maxime,
> -----Original Message-----
> From: Maxime Coquelin <maxime.coquelin@redhat.com>
> Sent: Wednesday, November 30, 2022 11:57 PM
> To: dev@dpdk.org; Xia, Chenbo <chenbo.xia@intel.com>;
> david.marchand@redhat.com; eperezma@redhat.com
> Cc: Maxime Coquelin <maxime.coquelin@redhat.com>
> Subject: [PATCH v1 21/21] net/virtio-user: remove max queues limitation
>
> This patch removes the limitation of 8 queue pairs by
> dynamically allocating vring metadata once we know the
> maximum number of queue pairs supported by the backend.
>
> This is especially useful for Vhost-vDPA with physical
> devices, where the maximum queues supported may be much
> more than 8 pairs.
>
> Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
> ---
> drivers/net/virtio/virtio.h | 6 -
> .../net/virtio/virtio_user/virtio_user_dev.c | 118 ++++++++++++++----
> .../net/virtio/virtio_user/virtio_user_dev.h | 16 +--
> drivers/net/virtio/virtio_user_ethdev.c | 17 +--
> 4 files changed, 109 insertions(+), 48 deletions(-)
>
> diff --git a/drivers/net/virtio/virtio.h b/drivers/net/virtio/virtio.h
> index 5c8f71a44d..04a897bf51 100644
> --- a/drivers/net/virtio/virtio.h
> +++ b/drivers/net/virtio/virtio.h
> @@ -124,12 +124,6 @@
> VIRTIO_NET_HASH_TYPE_UDP_EX)
>
>
> -/*
> - * Maximum number of virtqueues per device.
> - */
> -#define VIRTIO_MAX_VIRTQUEUE_PAIRS 8
> -#define VIRTIO_MAX_VIRTQUEUES (VIRTIO_MAX_VIRTQUEUE_PAIRS * 2 + 1)
> -
> /* VirtIO device IDs. */
> #define VIRTIO_ID_NETWORK 0x01
> #define VIRTIO_ID_BLOCK 0x02
> diff --git a/drivers/net/virtio/virtio_user/virtio_user_dev.c b/drivers/net/virtio/virtio_user/virtio_user_dev.c
> index 7c48c9bb29..aa24fdea70 100644
> --- a/drivers/net/virtio/virtio_user/virtio_user_dev.c
> +++ b/drivers/net/virtio/virtio_user/virtio_user_dev.c
> @@ -17,6 +17,7 @@
> #include <rte_alarm.h>
> #include <rte_string_fns.h>
> #include <rte_eal_memconfig.h>
> +#include <rte_malloc.h>
>
> #include "vhost.h"
> #include "virtio_user_dev.h"
> @@ -58,8 +59,8 @@ virtio_user_kick_queue(struct virtio_user_dev *dev, uint32_t queue_sel)
> int ret;
> struct vhost_vring_file file;
> struct vhost_vring_state state;
> - struct vring *vring = &dev->vrings[queue_sel];
> - struct vring_packed *pq_vring = &dev->packed_vrings[queue_sel];
> + struct vring *vring = &dev->vrings.split[queue_sel];
> + struct vring_packed *pq_vring = &dev->vrings.packed[queue_sel];
> struct vhost_vring_addr addr = {
> .index = queue_sel,
> .log_guest_addr = 0,
> @@ -299,18 +300,6 @@ virtio_user_dev_init_max_queue_pairs(struct virtio_user_dev *dev, uint32_t user_
> return ret;
> }
>
> - if (dev->max_queue_pairs > VIRTIO_MAX_VIRTQUEUE_PAIRS) {
> - /*
> - * If the device supports control queue, the control queue
> - * index is max_virtqueue_pairs * 2. Disable MQ if it happens.
> - */
> - PMD_DRV_LOG(ERR, "(%s) Device advertises too many queues (%u, max supported %u)",
> - dev->path, dev->max_queue_pairs, VIRTIO_MAX_VIRTQUEUE_PAIRS);
> - dev->max_queue_pairs = 1;
> -
> - return -1;
> - }
> -
> return 0;
> }
>
> @@ -579,6 +568,86 @@ virtio_user_dev_setup(struct virtio_user_dev *dev)
> return 0;
> }
>
> +static int
> +virtio_user_alloc_vrings(struct virtio_user_dev *dev)
> +{
> + int i, size, nr_vrings;
> +
> + nr_vrings = dev->max_queue_pairs * 2;
> + if (dev->hw_cvq)
> + nr_vrings++;
> +
> + dev->callfds = rte_zmalloc("virtio_user_dev", nr_vrings * sizeof(*dev->callfds), 0);
> + if (!dev->callfds) {
> + PMD_INIT_LOG(ERR, "(%s) Failed to alloc callfds", dev->path);
> + return -1;
> + }
> +
> + dev->kickfds = rte_zmalloc("virtio_user_dev", nr_vrings * sizeof(*dev->kickfds), 0);
> + if (!dev->kickfds) {
> + PMD_INIT_LOG(ERR, "(%s) Failed to alloc kickfds", dev->path);
> + goto free_callfds;
> + }
> +
> + for (i = 0; i < nr_vrings; i++) {
> + dev->callfds[i] = -1;
> + dev->kickfds[i] = -1;
> + }
> +
> + size = RTE_MAX(sizeof(*dev->vrings.split), sizeof(*dev->vrings.packed));
> + dev->vrings.ptr = rte_zmalloc("virtio_user_dev", nr_vrings * size, 0);
> + if (!dev->vrings.ptr) {
> + PMD_INIT_LOG(ERR, "(%s) Failed to alloc vrings metadata", dev->path);
> + goto free_kickfds;
> + }
> +
> + dev->packed_queues = rte_zmalloc("virtio_user_dev",
> + nr_vrings * sizeof(*dev->packed_queues), 0);
Should we pass in whether the vq is packed or not? That would let us skip
the allocation of dev->packed_queues for split rings, and also size dev->vrings.ptr correctly.
Thanks,
Chenbo
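For illustration, Chenbo's suggestion could look roughly like the sketch below. This is a hypothetical rework, not the applied patch: `fake_dev`, `alloc_vrings`, the stand-in struct definitions, and the use of plain `calloc` (in place of `rte_zmalloc`) are all assumptions made to keep the example self-contained.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <stdlib.h>

/* Stand-ins for the real DPDK types; what matters is that the split and
 * packed ring metadata have different sizes. */
struct vring { uint32_t num; void *desc, *avail, *used; };
struct vring_packed { uint32_t num; void *desc, *driver, *device; };
struct virtio_user_queue { uint16_t used_idx; bool avail_wrap_counter; };

struct fake_dev {
	void *vrings;
	struct virtio_user_queue *packed_queues;
};

/* If the packed/split choice is known at allocation time, size the vring
 * metadata exactly and skip packed_queues entirely for split rings. */
static int alloc_vrings(struct fake_dev *dev, int nr_vrings, bool packed)
{
	size_t size = packed ? sizeof(struct vring_packed) : sizeof(struct vring);

	dev->vrings = calloc(nr_vrings, size);
	if (!dev->vrings)
		return -1;

	dev->packed_queues = NULL;
	if (packed) {
		dev->packed_queues = calloc(nr_vrings, sizeof(*dev->packed_queues));
		if (!dev->packed_queues) {
			free(dev->vrings);
			dev->vrings = NULL;
			return -1;
		}
	}
	return 0;
}
```

One trade-off: the packed/split choice must be known when the metadata is allocated, which may not be the case before feature negotiation completes; allocating the worst-case size up front, as the patch does, avoids that ordering constraint.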
> + if (!dev->packed_queues) {
> + PMD_INIT_LOG(ERR, "(%s) Failed to alloc packed queues metadata", dev->path);
> + goto free_vrings;
> + }
> +
> + dev->qp_enabled = rte_zmalloc("virtio_user_dev",
> + dev->max_queue_pairs * sizeof(*dev->qp_enabled), 0);
> + if (!dev->qp_enabled) {
> + PMD_INIT_LOG(ERR, "(%s) Failed to alloc QP enable states", dev->path);
> + goto free_packed_queues;
> + }
> +
> + return 0;
> +
> +free_packed_queues:
> + rte_free(dev->packed_queues);
> + dev->packed_queues = NULL;
> +free_vrings:
> + rte_free(dev->vrings.ptr);
> + dev->vrings.ptr = NULL;
> +free_kickfds:
> + rte_free(dev->kickfds);
> + dev->kickfds = NULL;
> +free_callfds:
> + rte_free(dev->callfds);
> + dev->callfds = NULL;
> +
> + return -1;
> +}
> +
> +static void
> +virtio_user_free_vrings(struct virtio_user_dev *dev)
> +{
> + rte_free(dev->qp_enabled);
> + dev->qp_enabled = NULL;
> + rte_free(dev->packed_queues);
> + dev->packed_queues = NULL;
> + rte_free(dev->vrings.ptr);
> + dev->vrings.ptr = NULL;
> + rte_free(dev->kickfds);
> + dev->kickfds = NULL;
> + rte_free(dev->callfds);
> + dev->callfds = NULL;
> +}
> +
> /* Use below macro to filter features from vhost backend */
> #define VIRTIO_USER_SUPPORTED_FEATURES \
> (1ULL << VIRTIO_NET_F_MAC | \
> @@ -607,16 +676,10 @@ virtio_user_dev_init(struct virtio_user_dev *dev, char *path, uint16_t queues,
> enum virtio_user_backend_type backend_type)
> {
> uint64_t backend_features;
> - int i;
>
> pthread_mutex_init(&dev->mutex, NULL);
> strlcpy(dev->path, path, PATH_MAX);
>
> - for (i = 0; i < VIRTIO_MAX_VIRTQUEUES; i++) {
> - dev->kickfds[i] = -1;
> - dev->callfds[i] = -1;
> - }
> -
> dev->started = 0;
> dev->queue_pairs = 1; /* mq disabled by default */
> dev->queue_size = queue_size;
> @@ -661,9 +724,14 @@ virtio_user_dev_init(struct virtio_user_dev *dev, char *path, uint16_t queues,
> if (dev->max_queue_pairs > 1)
> cq = 1;
>
> + if (virtio_user_alloc_vrings(dev) < 0) {
> + PMD_INIT_LOG(ERR, "(%s) Failed to allocate vring metadata", dev->path);
> + goto destroy;
> + }
> +
> if (virtio_user_dev_init_notify(dev) < 0) {
> PMD_INIT_LOG(ERR, "(%s) Failed to init notifiers", dev->path);
> - goto destroy;
> + goto free_vrings;
> }
>
> if (virtio_user_fill_intr_handle(dev) < 0) {
> @@ -722,6 +790,8 @@ virtio_user_dev_init(struct virtio_user_dev *dev, char *path, uint16_t queues,
>
> notify_uninit:
> virtio_user_dev_uninit_notify(dev);
> +free_vrings:
> + virtio_user_free_vrings(dev);
> destroy:
> dev->ops->destroy(dev);
>
> @@ -742,6 +812,8 @@ virtio_user_dev_uninit(struct virtio_user_dev *dev)
>
> virtio_user_dev_uninit_notify(dev);
>
> + virtio_user_free_vrings(dev);
> +
> free(dev->ifname);
>
> if (dev->is_server)
> @@ -897,7 +969,7 @@ static void
> virtio_user_handle_cq_packed(struct virtio_user_dev *dev, uint16_t queue_idx)
> {
> struct virtio_user_queue *vq = &dev->packed_queues[queue_idx];
> - struct vring_packed *vring = &dev->packed_vrings[queue_idx];
> + struct vring_packed *vring = &dev->vrings.packed[queue_idx];
> uint16_t n_descs, flags;
>
> /* Perform a load-acquire barrier in desc_is_avail to
> @@ -931,7 +1003,7 @@ virtio_user_handle_cq_split(struct virtio_user_dev *dev, uint16_t queue_idx)
> uint16_t avail_idx, desc_idx;
> struct vring_used_elem *uep;
> uint32_t n_descs;
> - struct vring *vring = &dev->vrings[queue_idx];
> + struct vring *vring = &dev->vrings.split[queue_idx];
>
> /* Consume avail ring, using used ring idx as first one */
> while (__atomic_load_n(&vring->used->idx, __ATOMIC_RELAXED)
> diff --git a/drivers/net/virtio/virtio_user/virtio_user_dev.h b/drivers/net/virtio/virtio_user/virtio_user_dev.h
> index e8753f6019..7323d88302 100644
> --- a/drivers/net/virtio/virtio_user/virtio_user_dev.h
> +++ b/drivers/net/virtio/virtio_user/virtio_user_dev.h
> @@ -29,8 +29,8 @@ struct virtio_user_dev {
> enum virtio_user_backend_type backend_type;
> bool is_server; /* server or client mode */
>
> - int callfds[VIRTIO_MAX_VIRTQUEUES];
> - int kickfds[VIRTIO_MAX_VIRTQUEUES];
> + int *callfds;
> + int *kickfds;
> int mac_specified;
> uint16_t max_queue_pairs;
> uint16_t queue_pairs;
> @@ -48,11 +48,13 @@ struct virtio_user_dev {
> char *ifname;
>
> union {
> - struct vring vrings[VIRTIO_MAX_VIRTQUEUES];
> - struct vring_packed packed_vrings[VIRTIO_MAX_VIRTQUEUES];
> - };
> - struct virtio_user_queue packed_queues[VIRTIO_MAX_VIRTQUEUES];
> - bool qp_enabled[VIRTIO_MAX_VIRTQUEUE_PAIRS];
> + void *ptr;
> + struct vring *split;
> + struct vring_packed *packed;
> + } vrings;
> +
> + struct virtio_user_queue *packed_queues;
> + bool *qp_enabled;
>
> struct virtio_user_backend_ops *ops;
> pthread_mutex_t mutex;
> diff --git a/drivers/net/virtio/virtio_user_ethdev.c b/drivers/net/virtio/virtio_user_ethdev.c
> index d23959e836..b1fc4d5d30 100644
> --- a/drivers/net/virtio/virtio_user_ethdev.c
> +++ b/drivers/net/virtio/virtio_user_ethdev.c
> @@ -186,7 +186,7 @@ virtio_user_setup_queue_packed(struct virtqueue *vq,
> uint64_t used_addr;
> uint16_t i;
>
> - vring = &dev->packed_vrings[queue_idx];
> + vring = &dev->vrings.packed[queue_idx];
> desc_addr = (uintptr_t)vq->vq_ring_virt_mem;
> avail_addr = desc_addr + vq->vq_nentries *
> sizeof(struct vring_packed_desc);
> @@ -216,10 +216,10 @@ virtio_user_setup_queue_split(struct virtqueue *vq, struct virtio_user_dev *dev)
> ring[vq->vq_nentries]),
> VIRTIO_VRING_ALIGN);
>
> - dev->vrings[queue_idx].num = vq->vq_nentries;
> - dev->vrings[queue_idx].desc = (void *)(uintptr_t)desc_addr;
> - dev->vrings[queue_idx].avail = (void *)(uintptr_t)avail_addr;
> - dev->vrings[queue_idx].used = (void *)(uintptr_t)used_addr;
> + dev->vrings.split[queue_idx].num = vq->vq_nentries;
> + dev->vrings.split[queue_idx].desc = (void *)(uintptr_t)desc_addr;
> + dev->vrings.split[queue_idx].avail = (void *)(uintptr_t)avail_addr;
> + dev->vrings.split[queue_idx].used = (void *)(uintptr_t)used_addr;
> }
>
> static int
> @@ -619,13 +619,6 @@ virtio_user_pmd_probe(struct rte_vdev_device *vdev)
> }
> }
>
> - if (queues > VIRTIO_MAX_VIRTQUEUE_PAIRS) {
> - PMD_INIT_LOG(ERR, "arg %s %" PRIu64 " exceeds the limit %u",
> - VIRTIO_USER_ARG_QUEUES_NUM, queues,
> - VIRTIO_MAX_VIRTQUEUE_PAIRS);
> - goto end;
> - }
> -
> if (rte_kvargs_count(kvlist, VIRTIO_USER_ARG_MRG_RXBUF) == 1) {
> if (rte_kvargs_process(kvlist, VIRTIO_USER_ARG_MRG_RXBUF,
> &get_integer_arg, &mrg_rxbuf) < 0) {
> --
> 2.38.1
^ permalink raw reply [flat|nested] 48+ messages in thread
* RE: [PATCH v1 12/21] net/virtio-user: fix device starting failure handling
2022-11-30 15:56 ` [PATCH v1 12/21] net/virtio-user: fix device starting failure handling Maxime Coquelin
@ 2023-01-31 5:20 ` Xia, Chenbo
0 siblings, 0 replies; 48+ messages in thread
From: Xia, Chenbo @ 2023-01-31 5:20 UTC (permalink / raw)
To: Coquelin, Maxime, dev, david.marchand, eperezma; +Cc: stable
> -----Original Message-----
> From: Maxime Coquelin <maxime.coquelin@redhat.com>
> Sent: Wednesday, November 30, 2022 11:57 PM
> To: dev@dpdk.org; Xia, Chenbo <chenbo.xia@intel.com>;
> david.marchand@redhat.com; eperezma@redhat.com
> Cc: Maxime Coquelin <maxime.coquelin@redhat.com>; stable@dpdk.org
> Subject: [PATCH v1 12/21] net/virtio-user: fix device starting failure
> handling
>
> If the device fails to start, read the status from the
> device and return early.
>
> Fixes: 57912824615f ("net/virtio-user: support vhost status setting")
> Cc: stable@dpdk.org
>
> Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
> ---
> drivers/net/virtio/virtio_user_ethdev.c | 11 ++++++++---
> 1 file changed, 8 insertions(+), 3 deletions(-)
>
> diff --git a/drivers/net/virtio/virtio_user_ethdev.c b/drivers/net/virtio/virtio_user_ethdev.c
> index d32abec327..78b1ed9ace 100644
> --- a/drivers/net/virtio/virtio_user_ethdev.c
> +++ b/drivers/net/virtio/virtio_user_ethdev.c
> @@ -90,10 +90,15 @@ virtio_user_set_status(struct virtio_hw *hw, uint8_t status)
> if (status & VIRTIO_CONFIG_STATUS_FEATURES_OK &&
> ~old_status & VIRTIO_CONFIG_STATUS_FEATURES_OK)
> virtio_user_dev_set_features(dev);
> - if (status & VIRTIO_CONFIG_STATUS_DRIVER_OK)
> - virtio_user_start_device(dev);
> - else if (status == VIRTIO_CONFIG_STATUS_RESET)
> +
> + if (status & VIRTIO_CONFIG_STATUS_DRIVER_OK) {
> + if (virtio_user_start_device(dev)) {
> + virtio_user_dev_update_status(dev);
> + return;
> + }
> + } else if (status == VIRTIO_CONFIG_STATUS_RESET) {
> virtio_user_reset(hw);
> + }
>
> virtio_user_dev_set_status(dev, status);
> }
> --
> 2.38.1
Reviewed-by: Chenbo Xia <chenbo.xia@intel.com>
^ permalink raw reply [flat|nested] 48+ messages in thread
* RE: [PATCH v1 13/21] net/virtio-user: simplify queues setup
2022-11-30 15:56 ` [PATCH v1 13/21] net/virtio-user: simplify queues setup Maxime Coquelin
@ 2023-01-31 5:21 ` Xia, Chenbo
0 siblings, 0 replies; 48+ messages in thread
From: Xia, Chenbo @ 2023-01-31 5:21 UTC (permalink / raw)
To: Coquelin, Maxime, dev, david.marchand, eperezma
> -----Original Message-----
> From: Maxime Coquelin <maxime.coquelin@redhat.com>
> Sent: Wednesday, November 30, 2022 11:57 PM
> To: dev@dpdk.org; Xia, Chenbo <chenbo.xia@intel.com>;
> david.marchand@redhat.com; eperezma@redhat.com
> Cc: Maxime Coquelin <maxime.coquelin@redhat.com>
> Subject: [PATCH v1 13/21] net/virtio-user: simplify queues setup
>
> The only reason two loops were needed to iterate over
> queues at setup time was to be able to print whether it
> was a Tx or Rx queue.
>
> This patch changes queues iteration to a single loop.
>
> Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
> ---
> drivers/net/virtio/virtio_user/virtio_user_dev.c | 16 ++++------------
> 1 file changed, 4 insertions(+), 12 deletions(-)
>
> diff --git a/drivers/net/virtio/virtio_user/virtio_user_dev.c b/drivers/net/virtio/virtio_user/virtio_user_dev.c
> index 19599aa3f6..873c6aa036 100644
> --- a/drivers/net/virtio/virtio_user/virtio_user_dev.c
> +++ b/drivers/net/virtio/virtio_user/virtio_user_dev.c
> @@ -118,19 +118,11 @@ static int
> virtio_user_queue_setup(struct virtio_user_dev *dev,
> int (*fn)(struct virtio_user_dev *, uint32_t))
> {
> - uint32_t i, queue_sel;
> + uint32_t i;
>
> - for (i = 0; i < dev->max_queue_pairs; ++i) {
> - queue_sel = 2 * i + VTNET_SQ_RQ_QUEUE_IDX;
> - if (fn(dev, queue_sel) < 0) {
> - PMD_DRV_LOG(ERR, "(%s) setup rx vq %u failed", dev->path, i);
> - return -1;
> - }
> - }
> - for (i = 0; i < dev->max_queue_pairs; ++i) {
> - queue_sel = 2 * i + VTNET_SQ_TQ_QUEUE_IDX;
> - if (fn(dev, queue_sel) < 0) {
> - PMD_DRV_LOG(INFO, "(%s) setup tx vq %u failed", dev->path, i);
> + for (i = 0; i < dev->max_queue_pairs * 2; ++i) {
> + if (fn(dev, i) < 0) {
> + PMD_DRV_LOG(ERR, "(%s) setup VQ %u failed", dev->path, i);
> return -1;
> }
> }
> --
> 2.38.1
Reviewed-by: Chenbo Xia <chenbo.xia@intel.com>
^ permalink raw reply [flat|nested] 48+ messages in thread
* RE: [PATCH v1 14/21] net/virtio-user: use proper type for number of queue pairs
2022-11-30 15:56 ` [PATCH v1 14/21] net/virtio-user: use proper type for number of queue pairs Maxime Coquelin
@ 2023-01-31 5:21 ` Xia, Chenbo
0 siblings, 0 replies; 48+ messages in thread
From: Xia, Chenbo @ 2023-01-31 5:21 UTC (permalink / raw)
To: Coquelin, Maxime, dev, david.marchand, eperezma
> -----Original Message-----
> From: Maxime Coquelin <maxime.coquelin@redhat.com>
> Sent: Wednesday, November 30, 2022 11:57 PM
> To: dev@dpdk.org; Xia, Chenbo <chenbo.xia@intel.com>;
> david.marchand@redhat.com; eperezma@redhat.com
> Cc: Maxime Coquelin <maxime.coquelin@redhat.com>
> Subject: [PATCH v1 14/21] net/virtio-user: use proper type for number of
> queue pairs
>
> The number of queue pairs is specified as a 16 bits
> unsigned int in the Virtio specification.
>
> Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
> ---
> drivers/net/virtio/virtio_user/virtio_user_dev.c | 2 +-
> drivers/net/virtio/virtio_user/virtio_user_dev.h | 6 +++---
> drivers/net/virtio/virtio_user_ethdev.c | 2 +-
> 3 files changed, 5 insertions(+), 5 deletions(-)
>
> diff --git a/drivers/net/virtio/virtio_user/virtio_user_dev.c b/drivers/net/virtio/virtio_user/virtio_user_dev.c
> index 873c6aa036..809c9ef442 100644
> --- a/drivers/net/virtio/virtio_user/virtio_user_dev.c
> +++ b/drivers/net/virtio/virtio_user/virtio_user_dev.c
> @@ -553,7 +553,7 @@ virtio_user_dev_setup(struct virtio_user_dev *dev)
> 1ULL << VIRTIO_F_RING_PACKED)
>
> int
> -virtio_user_dev_init(struct virtio_user_dev *dev, char *path, int queues,
> +virtio_user_dev_init(struct virtio_user_dev *dev, char *path, uint16_t queues,
> int cq, int queue_size, const char *mac, char **ifname,
> int server, int mrg_rxbuf, int in_order, int packed_vq,
> enum virtio_user_backend_type backend_type)
> diff --git a/drivers/net/virtio/virtio_user/virtio_user_dev.h b/drivers/net/virtio/virtio_user/virtio_user_dev.h
> index 819f6463ba..3c5453eac0 100644
> --- a/drivers/net/virtio/virtio_user/virtio_user_dev.h
> +++ b/drivers/net/virtio/virtio_user/virtio_user_dev.h
> @@ -32,8 +32,8 @@ struct virtio_user_dev {
> int callfds[VIRTIO_MAX_VIRTQUEUES];
> int kickfds[VIRTIO_MAX_VIRTQUEUES];
> int mac_specified;
> - uint32_t max_queue_pairs;
> - uint32_t queue_pairs;
> + uint16_t max_queue_pairs;
> + uint16_t queue_pairs;
> uint32_t queue_size;
> uint64_t features; /* the negotiated features with driver,
> * and will be sync with device
> @@ -64,7 +64,7 @@ struct virtio_user_dev {
> int virtio_user_dev_set_features(struct virtio_user_dev *dev);
> int virtio_user_start_device(struct virtio_user_dev *dev);
> int virtio_user_stop_device(struct virtio_user_dev *dev);
> -int virtio_user_dev_init(struct virtio_user_dev *dev, char *path, int queues,
> +int virtio_user_dev_init(struct virtio_user_dev *dev, char *path, uint16_t queues,
> int cq, int queue_size, const char *mac, char **ifname,
> int server, int mrg_rxbuf, int in_order, int packed_vq,
> diff --git a/drivers/net/virtio/virtio_user_ethdev.c b/drivers/net/virtio/virtio_user_ethdev.c
> index 78b1ed9ace..6ad5896378 100644
> --- a/drivers/net/virtio/virtio_user_ethdev.c
> +++ b/drivers/net/virtio/virtio_user_ethdev.c
> @@ -655,7 +655,7 @@ virtio_user_pmd_probe(struct rte_vdev_device *vdev)
>
> dev = eth_dev->data->dev_private;
> hw = &dev->hw;
> - if (virtio_user_dev_init(dev, path, queues, cq,
> + if (virtio_user_dev_init(dev, path, (uint16_t)queues, cq,
> queue_size, mac_addr, &ifname, server_mode,
> mrg_rxbuf, in_order, packed_vq, backend_type) < 0) {
> PMD_INIT_LOG(ERR, "virtio_user_dev_init fails");
> --
> 2.38.1
Reviewed-by: Chenbo Xia <chenbo.xia@intel.com>
^ permalink raw reply [flat|nested] 48+ messages in thread
* RE: [PATCH v1 15/21] net/virtio-user: get max number of queue pairs from device
2022-11-30 15:56 ` [PATCH v1 15/21] net/virtio-user: get max number of queue pairs from device Maxime Coquelin
@ 2023-01-31 5:21 ` Xia, Chenbo
0 siblings, 0 replies; 48+ messages in thread
From: Xia, Chenbo @ 2023-01-31 5:21 UTC (permalink / raw)
To: Coquelin, Maxime, dev, david.marchand, eperezma
> -----Original Message-----
> From: Maxime Coquelin <maxime.coquelin@redhat.com>
> Sent: Wednesday, November 30, 2022 11:57 PM
> To: dev@dpdk.org; Xia, Chenbo <chenbo.xia@intel.com>;
> david.marchand@redhat.com; eperezma@redhat.com
> Cc: Maxime Coquelin <maxime.coquelin@redhat.com>
> Subject: [PATCH v1 15/21] net/virtio-user: get max number of queue pairs
> from device
>
> When supported by the backend (only vDPA for now), this
> patch gets the maximum number of queue pairs supported by
> the device by querying it in its config space.
>
> This is required for adding backend control queue support,
> as its index equals the maximum number of queues supported
> by the device, as described by the Virtio specification.
>
> Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
> ---
> .../net/virtio/virtio_user/virtio_user_dev.c | 93 ++++++++++++++-----
> drivers/net/virtio/virtio_user_ethdev.c | 7 --
> 2 files changed, 71 insertions(+), 29 deletions(-)
>
> 2.38.1
Reviewed-by: Chenbo Xia <chenbo.xia@intel.com>
^ permalink raw reply [flat|nested] 48+ messages in thread
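The diff body of this patch is elided in the archive above. As a rough illustration of the mechanism the commit message describes — reading max_virtqueue_pairs from the virtio-net device config space — here is a self-contained sketch. The struct layout follows the Virtio specification, but `mock_get_config` and `get_max_queue_pairs` are invented stand-ins for the backend's `get_config` callback (which for vhost-vDPA issues VHOST_VDPA_GET_CONFIG under the hood).

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* First fields of the virtio-net device config space (Virtio spec 1.1,
 * section 5.1.4); max_virtqueue_pairs sits at offset 8. */
struct virtio_net_config {
	uint8_t  mac[6];
	uint16_t status;
	uint16_t max_virtqueue_pairs;
	uint16_t mtu;
} __attribute__((packed));

/* Mocked backend ->get_config(): copies out of a fixed config blob. */
static int mock_get_config(uint8_t *data, uint32_t off, uint32_t len)
{
	static const struct virtio_net_config cfg = { .max_virtqueue_pairs = 16 };

	if (off + len > sizeof(cfg))
		return -1;
	memcpy(data, (const uint8_t *)&cfg + off, len);
	return 0;
}

/* Query only the max_virtqueue_pairs field, falling back to a single
 * queue pair when the device reports zero. */
static int get_max_queue_pairs(uint16_t *max_qp)
{
	uint16_t qp;

	if (mock_get_config((uint8_t *)&qp,
			    offsetof(struct virtio_net_config, max_virtqueue_pairs),
			    sizeof(qp)) < 0)
		return -1;
	*max_qp = qp ? qp : 1;
	return 0;
}
```

In the real driver this query only makes sense once VIRTIO_NET_F_MQ has been negotiated; otherwise the field is not valid and a single queue pair is assumed.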
* RE: [PATCH v1 16/21] net/virtio-user: allocate shadow control queue
2022-11-30 15:56 ` [PATCH v1 16/21] net/virtio-user: allocate shadow control queue Maxime Coquelin
@ 2023-01-31 5:21 ` Xia, Chenbo
0 siblings, 0 replies; 48+ messages in thread
From: Xia, Chenbo @ 2023-01-31 5:21 UTC (permalink / raw)
To: Coquelin, Maxime, dev, david.marchand, eperezma
> -----Original Message-----
> From: Maxime Coquelin <maxime.coquelin@redhat.com>
> Sent: Wednesday, November 30, 2022 11:57 PM
> To: dev@dpdk.org; Xia, Chenbo <chenbo.xia@intel.com>;
> david.marchand@redhat.com; eperezma@redhat.com
> Cc: Maxime Coquelin <maxime.coquelin@redhat.com>
> Subject: [PATCH v1 16/21] net/virtio-user: allocate shadow control queue
>
> If the backend supports the control virtqueue, allocate a
> shadow control virtqueue, and implement the notify callback
> that writes into the kickfd.
>
> Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
> ---
> .../net/virtio/virtio_user/virtio_user_dev.c | 47 ++++++++++++++++++-
> .../net/virtio/virtio_user/virtio_user_dev.h | 5 ++
> drivers/net/virtio/virtio_user_ethdev.c | 6 +++
> 3 files changed, 56 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/net/virtio/virtio_user/virtio_user_dev.c b/drivers/net/virtio/virtio_user/virtio_user_dev.c
> index a3584e7735..16a0e07413 100644
> --- a/drivers/net/virtio/virtio_user/virtio_user_dev.c
> +++ b/drivers/net/virtio/virtio_user/virtio_user_dev.c
> @@ -146,8 +146,9 @@ virtio_user_dev_set_features(struct virtio_user_dev *dev)
>
> /* Strip VIRTIO_NET_F_MAC, as MAC address is handled in vdev init */
> features &= ~(1ull << VIRTIO_NET_F_MAC);
> - /* Strip VIRTIO_NET_F_CTRL_VQ, as devices do not really need to know */
> - features &= ~(1ull << VIRTIO_NET_F_CTRL_VQ);
> + /* Strip VIRTIO_NET_F_CTRL_VQ if the device does not really support control VQ */
> + if (!dev->hw_cvq)
> + features &= ~(1ull << VIRTIO_NET_F_CTRL_VQ);
> features &= ~(1ull << VIRTIO_NET_F_STATUS);
> ret = dev->ops->set_features(dev, features);
> if (ret < 0)
> @@ -911,6 +912,48 @@ virtio_user_handle_cq(struct virtio_user_dev *dev, uint16_t queue_idx)
> }
> }
>
> +static void
> +virtio_user_control_queue_notify(struct virtqueue *vq, void *cookie)
> +{
> + struct virtio_user_dev *dev = cookie;
> + uint64_t buf = 1;
> +
> + if (write(dev->kickfds[vq->vq_queue_index], &buf, sizeof(buf)) < 0)
> + PMD_DRV_LOG(ERR, "failed to kick backend: %s",
> + strerror(errno));
> +}
> +
> +int
> +virtio_user_dev_create_shadow_cvq(struct virtio_user_dev *dev, struct virtqueue *vq)
> +{
> + char name[VIRTQUEUE_MAX_NAME_SZ];
> + struct virtqueue *scvq;
> +
> + snprintf(name, sizeof(name), "port%d_shadow_cvq", vq->hw->port_id);
> + scvq = virtqueue_alloc(&dev->hw, vq->vq_queue_index, vq->vq_nentries,
> + VTNET_CQ, SOCKET_ID_ANY, name);
> + if (!scvq) {
> + PMD_INIT_LOG(ERR, "(%s) Failed to alloc shadow control vq\n", dev->path);
> + return -ENOMEM;
> + }
> +
> + scvq->cq.notify_queue = &virtio_user_control_queue_notify;
> + scvq->cq.notify_cookie = dev;
> + dev->scvq = scvq;
> +
> + return 0;
> +}
> +
> +void
> +virtio_user_dev_destroy_shadow_cvq(struct virtio_user_dev *dev)
> +{
> + if (!dev->scvq)
> + return;
> +
> + virtqueue_free(dev->scvq);
> + dev->scvq = NULL;
> +}
> +
> int
> virtio_user_dev_set_status(struct virtio_user_dev *dev, uint8_t status)
> {
> diff --git a/drivers/net/virtio/virtio_user/virtio_user_dev.h b/drivers/net/virtio/virtio_user/virtio_user_dev.h
> index 3c5453eac0..e0db4faf3f 100644
> --- a/drivers/net/virtio/virtio_user/virtio_user_dev.h
> +++ b/drivers/net/virtio/virtio_user/virtio_user_dev.h
> @@ -58,6 +58,9 @@ struct virtio_user_dev {
> pthread_mutex_t mutex;
> bool started;
>
> + bool hw_cvq;
> + struct virtqueue *scvq;
> +
> void *backend_data;
> };
>
> @@ -74,6 +77,8 @@ void virtio_user_handle_cq(struct virtio_user_dev *dev, uint16_t queue_idx);
> void virtio_user_handle_cq_packed(struct virtio_user_dev *dev,
> uint16_t queue_idx);
> uint8_t virtio_user_handle_mq(struct virtio_user_dev *dev, uint16_t q_pairs);
> +int virtio_user_dev_create_shadow_cvq(struct virtio_user_dev *dev, struct virtqueue *vq);
> +void virtio_user_dev_destroy_shadow_cvq(struct virtio_user_dev *dev);
> int virtio_user_dev_set_status(struct virtio_user_dev *dev, uint8_t status);
> int virtio_user_dev_update_status(struct virtio_user_dev *dev);
> int virtio_user_dev_update_link_state(struct virtio_user_dev *dev);
> diff --git a/drivers/net/virtio/virtio_user_ethdev.c b/drivers/net/virtio/virtio_user_ethdev.c
> index 6c3e875793..626bd95b62 100644
> --- a/drivers/net/virtio/virtio_user_ethdev.c
> +++ b/drivers/net/virtio/virtio_user_ethdev.c
> @@ -232,6 +232,9 @@ virtio_user_setup_queue(struct virtio_hw *hw, struct virtqueue *vq)
> else
> virtio_user_setup_queue_split(vq, dev);
>
> + if (dev->hw_cvq && hw->cvq && (virtnet_cq_to_vq(hw->cvq) == vq))
> + return virtio_user_dev_create_shadow_cvq(dev, vq);
> +
> return 0;
> }
>
> @@ -251,6 +254,9 @@ virtio_user_del_queue(struct virtio_hw *hw, struct virtqueue *vq)
>
> close(dev->callfds[vq->vq_queue_index]);
> close(dev->kickfds[vq->vq_queue_index]);
> +
> + if (hw->cvq && (virtnet_cq_to_vq(hw->cvq) == vq) && dev->scvq)
> + virtio_user_dev_destroy_shadow_cvq(dev);
> }
>
> static void
> --
> 2.38.1
Reviewed-by: Chenbo Xia <chenbo.xia@intel.com>
^ permalink raw reply [flat|nested] 48+ messages in thread
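The shadow control queue's notify callback in the patch above kicks the backend by writing an 8-byte value of 1 to the queue's kickfd. A minimal, standalone sketch of that eventfd-based kick mechanism (Linux-only, without the DPDK wrappers; the function names here are illustrative, not from the patch):

```c
#include <stdint.h>
#include <sys/eventfd.h>
#include <unistd.h>

/* Kick: write an 8-byte counter increment, exactly as
 * virtio_user_control_queue_notify() does with buf = 1. */
static int kick(int kickfd)
{
	uint64_t buf = 1;

	return write(kickfd, &buf, sizeof(buf)) == sizeof(buf) ? 0 : -1;
}

/* Backend side: reading the eventfd returns the accumulated counter
 * value and resets it to zero, so multiple kicks coalesce into one read. */
static uint64_t drain_kicks(int kickfd)
{
	uint64_t cnt = 0;

	if (read(kickfd, &cnt, sizeof(cnt)) != sizeof(cnt))
		return 0;
	return cnt;
}
```

The coalescing behavior is why eventfd suits virtqueue notification: the backend does not need one wakeup per descriptor, only an indication that work is pending.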
* RE: [PATCH v1 17/21] net/virtio-user: send shadow virtqueue info to the backend
2022-11-30 15:56 ` [PATCH v1 17/21] net/virtio-user: send shadow virtqueue info to the backend Maxime Coquelin
@ 2023-01-31 5:22 ` Xia, Chenbo
0 siblings, 0 replies; 48+ messages in thread
From: Xia, Chenbo @ 2023-01-31 5:22 UTC (permalink / raw)
To: Coquelin, Maxime, dev, david.marchand, eperezma
> -----Original Message-----
> From: Maxime Coquelin <maxime.coquelin@redhat.com>
> Sent: Wednesday, November 30, 2022 11:57 PM
> To: dev@dpdk.org; Xia, Chenbo <chenbo.xia@intel.com>;
> david.marchand@redhat.com; eperezma@redhat.com
> Cc: Maxime Coquelin <maxime.coquelin@redhat.com>
> Subject: [PATCH v1 17/21] net/virtio-user: send shadow virtqueue info to
> the backend
>
> This patch adds sending the shadow control queue info
> to the backend.
>
> Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
> ---
> .../net/virtio/virtio_user/virtio_user_dev.c | 28 ++++++++++++++++---
> 1 file changed, 24 insertions(+), 4 deletions(-)
>
> diff --git a/drivers/net/virtio/virtio_user/virtio_user_dev.c b/drivers/net/virtio/virtio_user/virtio_user_dev.c
> index 16a0e07413..1a5386a3f6 100644
> --- a/drivers/net/virtio/virtio_user/virtio_user_dev.c
> +++ b/drivers/net/virtio/virtio_user/virtio_user_dev.c
> @@ -66,6 +66,18 @@ virtio_user_kick_queue(struct virtio_user_dev *dev, uint32_t queue_sel)
> .flags = 0, /* disable log */
> };
>
> + if (queue_sel == dev->max_queue_pairs * 2) {
> + if (!dev->scvq) {
> + PMD_INIT_LOG(ERR, "(%s) Shadow control queue expected but missing",
> + dev->path);
> + goto err;
> + }
> +
> + /* Use shadow control queue information */
> + vring = &dev->scvq->vq_split.ring;
> + pq_vring = &dev->scvq->vq_packed.ring;
> + }
> +
> if (dev->features & (1ULL << VIRTIO_F_RING_PACKED)) {
> addr.desc_user_addr =
> (uint64_t)(uintptr_t)pq_vring->desc;
> @@ -118,9 +130,13 @@ static int
> virtio_user_queue_setup(struct virtio_user_dev *dev,
> int (*fn)(struct virtio_user_dev *, uint32_t))
> {
> - uint32_t i;
> + uint32_t i, nr_vq;
>
> - for (i = 0; i < dev->max_queue_pairs * 2; ++i) {
> + nr_vq = dev->max_queue_pairs * 2;
> + if (dev->hw_cvq)
> + nr_vq++;
> +
> + for (i = 0; i < nr_vq; i++) {
> if (fn(dev, i) < 0) {
> PMD_DRV_LOG(ERR, "(%s) setup VQ %u failed", dev->path,
> i);
> return -1;
> @@ -381,11 +397,15 @@ virtio_user_dev_init_mac(struct virtio_user_dev *dev, const char *mac)
> static int
> virtio_user_dev_init_notify(struct virtio_user_dev *dev)
> {
> - uint32_t i, j;
> + uint32_t i, j, nr_vq;
> int callfd;
> int kickfd;
>
> - for (i = 0; i < dev->max_queue_pairs * 2; i++) {
> + nr_vq = dev->max_queue_pairs * 2;
> + if (dev->hw_cvq)
> + nr_vq++;
> +
> + for (i = 0; i < nr_vq; i++) {
> /* May use invalid flag, but some backend uses kickfd and
> * callfd as criteria to judge if dev is alive. so finally we
> * use real event_fd.
> --
> 2.38.1
Reviewed-by: Chenbo Xia <chenbo.xia@intel.com>
^ permalink raw reply [flat|nested] 48+ messages in thread
* RE: [PATCH v1 18/21] net/virtio-user: add new callback to enable control queue
2022-11-30 15:56 ` [PATCH v1 18/21] net/virtio-user: add new callback to enable control queue Maxime Coquelin
@ 2023-01-31 5:22 ` Xia, Chenbo
0 siblings, 0 replies; 48+ messages in thread
From: Xia, Chenbo @ 2023-01-31 5:22 UTC (permalink / raw)
To: Coquelin, Maxime, dev, david.marchand, eperezma
> -----Original Message-----
> From: Maxime Coquelin <maxime.coquelin@redhat.com>
> Sent: Wednesday, November 30, 2022 11:57 PM
> To: dev@dpdk.org; Xia, Chenbo <chenbo.xia@intel.com>;
> david.marchand@redhat.com; eperezma@redhat.com
> Cc: Maxime Coquelin <maxime.coquelin@redhat.com>
> Subject: [PATCH v1 18/21] net/virtio-user: add new callback to enable
> control queue
>
> This patch introduces a new callback that is to be called
> when the backend supports control virtqueue.
>
> Implementation for Vhost-vDPA backend is added in this patch.
>
> Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
> ---
> drivers/net/virtio/virtio_user/vhost.h | 1 +
> drivers/net/virtio/virtio_user/vhost_vdpa.c | 15 +++++++++++++++
> drivers/net/virtio/virtio_user/virtio_user_dev.c | 3 +++
> 3 files changed, 19 insertions(+)
>
> diff --git a/drivers/net/virtio/virtio_user/vhost.h b/drivers/net/virtio/virtio_user/vhost.h
> index dfbf6be033..f817cab77a 100644
> --- a/drivers/net/virtio/virtio_user/vhost.h
> +++ b/drivers/net/virtio/virtio_user/vhost.h
> @@ -82,6 +82,7 @@ struct virtio_user_backend_ops {
> int (*get_config)(struct virtio_user_dev *dev, uint8_t *data, uint32_t off, uint32_t len);
> int (*set_config)(struct virtio_user_dev *dev, const uint8_t *data, uint32_t off, uint32_t len);
> + int (*cvq_enable)(struct virtio_user_dev *dev, int enable);
> int (*enable_qp)(struct virtio_user_dev *dev, uint16_t pair_idx, int enable);
> int (*dma_map)(struct virtio_user_dev *dev, void *addr, uint64_t iova, size_t len);
> int (*dma_unmap)(struct virtio_user_dev *dev, void *addr, uint64_t iova, size_t len);
> diff --git a/drivers/net/virtio/virtio_user/vhost_vdpa.c
> b/drivers/net/virtio/virtio_user/vhost_vdpa.c
> index a0897f8dd1..3fd13d9fac 100644
> --- a/drivers/net/virtio/virtio_user/vhost_vdpa.c
> +++ b/drivers/net/virtio/virtio_user/vhost_vdpa.c
> @@ -564,6 +564,20 @@ vhost_vdpa_destroy(struct virtio_user_dev *dev)
> return 0;
> }
>
> +static int
> +vhost_vdpa_cvq_enable(struct virtio_user_dev *dev, int enable)
> +{
> + struct vhost_vring_state state = {
> + .index = dev->max_queue_pairs * 2,
> + .num = enable,
> + };
> +
> + if (vhost_vdpa_set_vring_enable(dev, &state))
> + return -1;
> +
> + return 0;
> +}
> +
> static int
> vhost_vdpa_enable_queue_pair(struct virtio_user_dev *dev,
> uint16_t pair_idx,
> @@ -629,6 +643,7 @@ struct virtio_user_backend_ops virtio_ops_vdpa = {
> .set_status = vhost_vdpa_set_status,
> .get_config = vhost_vdpa_get_config,
> .set_config = vhost_vdpa_set_config,
> + .cvq_enable = vhost_vdpa_cvq_enable,
> .enable_qp = vhost_vdpa_enable_queue_pair,
> .dma_map = vhost_vdpa_dma_map_batch,
> .dma_unmap = vhost_vdpa_dma_unmap_batch,
> diff --git a/drivers/net/virtio/virtio_user/virtio_user_dev.c
> b/drivers/net/virtio/virtio_user/virtio_user_dev.c
> index 1a5386a3f6..b0d603ee12 100644
> --- a/drivers/net/virtio/virtio_user/virtio_user_dev.c
> +++ b/drivers/net/virtio/virtio_user/virtio_user_dev.c
> @@ -767,6 +767,9 @@ virtio_user_handle_mq(struct virtio_user_dev *dev, uint16_t q_pairs)
> for (i = q_pairs; i < dev->max_queue_pairs; ++i)
> ret |= dev->ops->enable_qp(dev, i, 0);
>
> + if (dev->scvq)
> + ret |= dev->ops->cvq_enable(dev, 1);
> +
> dev->queue_pairs = q_pairs;
>
> return ret;
> --
> 2.38.1
Reviewed-by: Chenbo Xia <chenbo.xia@intel.com>
* RE: [PATCH v1 20/21] net/virtio-user: advertize control VQ support with vDPA
2022-11-30 15:56 ` [PATCH v1 20/21] net/virtio-user: advertize control VQ support with vDPA Maxime Coquelin
@ 2023-01-31 5:24 ` Xia, Chenbo
0 siblings, 0 replies; 48+ messages in thread
From: Xia, Chenbo @ 2023-01-31 5:24 UTC (permalink / raw)
To: Coquelin, Maxime, dev, david.marchand, eperezma
> -----Original Message-----
> From: Maxime Coquelin <maxime.coquelin@redhat.com>
> Sent: Wednesday, November 30, 2022 11:57 PM
> To: dev@dpdk.org; Xia, Chenbo <chenbo.xia@intel.com>;
> david.marchand@redhat.com; eperezma@redhat.com
> Cc: Maxime Coquelin <maxime.coquelin@redhat.com>
> Subject: [PATCH v1 20/21] net/virtio-user: advertize control VQ support
> with vDPA
>
> This patch advertizes control virtqueue support by the vDPA
> backend if it supports VIRTIO_NET_F_CTRL_VQ.
>
> Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
> ---
> drivers/net/virtio/virtio_user/vhost_vdpa.c | 4 ++--
> 1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/net/virtio/virtio_user/vhost_vdpa.c
> b/drivers/net/virtio/virtio_user/vhost_vdpa.c
> index 3fd13d9fac..7bb4995893 100644
> --- a/drivers/net/virtio/virtio_user/vhost_vdpa.c
> +++ b/drivers/net/virtio/virtio_user/vhost_vdpa.c
> @@ -135,8 +135,8 @@ vhost_vdpa_get_features(struct virtio_user_dev *dev, uint64_t *features)
> return -1;
> }
>
> - /* Multiqueue not supported for now */
> - *features &= ~(1ULL << VIRTIO_NET_F_MQ);
> + if (*features & 1ULL << VIRTIO_NET_F_CTRL_VQ)
> + dev->hw_cvq = true;
>
> /* Negotiated vDPA backend features */
> ret = vhost_vdpa_get_protocol_features(dev, &data->protocol_features);
> --
> 2.38.1
Reviewed-by: Chenbo Xia <chenbo.xia@intel.com>
* Re: [PATCH v1 00/21] Add control queue & MQ support to Virtio-user vDPA
2023-01-30 5:57 ` [PATCH v1 00/21] Add control queue & MQ support to Virtio-user vDPA Xia, Chenbo
@ 2023-02-07 10:08 ` Maxime Coquelin
0 siblings, 0 replies; 48+ messages in thread
From: Maxime Coquelin @ 2023-02-07 10:08 UTC (permalink / raw)
To: Xia, Chenbo; +Cc: dev, david.marchand, eperezma
On 1/30/23 06:57, Xia, Chenbo wrote:
> Hi Maxime,
>
>> -----Original Message-----
>> From: Maxime Coquelin <maxime.coquelin@redhat.com>
>> Sent: Wednesday, November 30, 2022 11:56 PM
>> To: dev@dpdk.org; Xia, Chenbo <chenbo.xia@intel.com>;
>> david.marchand@redhat.com; eperezma@redhat.com
>> Cc: Maxime Coquelin <maxime.coquelin@redhat.com>
>> Subject: [PATCH v1 00/21] Add control queue & MQ support to Virtio-user
>> vDPA
>>
>> --
>> 2.38.1
>
> I see there is one virtio test failed on patchwork, could you check if
> it's related?
The logs are missing, so it is hard to tell. But the same test_virtio_loopback
test passes with the "XL710" setup, whereas the NIC should not be
involved in it.
http://mails.dpdk.org/archives/test-report/2022-November/327608.html
Maxime
>
> Thanks,
> Chenbo
>
* Re: [PATCH v1 10/21] net/virtio: alloc Rx SW ring only if vectorized path
2023-01-30 7:49 ` Xia, Chenbo
@ 2023-02-07 10:12 ` Maxime Coquelin
0 siblings, 0 replies; 48+ messages in thread
From: Maxime Coquelin @ 2023-02-07 10:12 UTC (permalink / raw)
To: Xia, Chenbo, dev, david.marchand, eperezma
On 1/30/23 08:49, Xia, Chenbo wrote:
> Hi Maxime,
>
>> -----Original Message-----
>> From: Maxime Coquelin <maxime.coquelin@redhat.com>
>> Sent: Wednesday, November 30, 2022 11:56 PM
>> To: dev@dpdk.org; Xia, Chenbo <chenbo.xia@intel.com>;
>> david.marchand@redhat.com; eperezma@redhat.com
>> Cc: Maxime Coquelin <maxime.coquelin@redhat.com>
>> Subject: [PATCH v1 10/21] net/virtio: alloc Rx SW ring only if vectorized
>> path
>>
>> This patch only allocates the SW ring when vectorized
>> datapath is used. It also moves the SW ring and fake mbuf
>> in the virtnet_rx struct since this is Rx-only.
>>
>> Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
>> ---
>> drivers/net/virtio/virtio_ethdev.c | 88 ++++++++++++-------
>> drivers/net/virtio/virtio_rxtx.c | 8 +-
>> drivers/net/virtio/virtio_rxtx.h | 4 +-
>> drivers/net/virtio/virtio_rxtx_simple.h | 2 +-
>> .../net/virtio/virtio_rxtx_simple_altivec.c | 4 +-
>> drivers/net/virtio/virtio_rxtx_simple_neon.c | 4 +-
>> drivers/net/virtio/virtio_rxtx_simple_sse.c | 4 +-
>> drivers/net/virtio/virtqueue.c | 6 +-
>> drivers/net/virtio/virtqueue.h | 1 -
>> 9 files changed, 72 insertions(+), 49 deletions(-)
>>
>> --- a/drivers/net/virtio/virtio_rxtx_simple_altivec.c
>> +++ b/drivers/net/virtio/virtio_rxtx_simple_altivec.c
>> @@ -103,8 +103,8 @@ virtio_recv_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
>>
>> desc_idx = (uint16_t)(vq->vq_used_cons_idx & (vq->vq_nentries - 1));
>> rused = &vq->vq_split.ring.used->ring[desc_idx];
>> - sw_ring = &vq->sw_ring[desc_idx];
>> - sw_ring_end = &vq->sw_ring[vq->vq_nentries];
>> + sw_ring = &vq->rxq.sw_ring[desc_idx];
>
> After sw_ring, there are two spaces; there should be only one.
Right, it was there before, but I fixed it in v2 here and elsewhere.
Thanks,
Maxime
* Re: [PATCH v1 21/21] net/virtio-user: remove max queues limitation
2023-01-31 5:19 ` Xia, Chenbo
@ 2023-02-07 14:14 ` Maxime Coquelin
0 siblings, 0 replies; 48+ messages in thread
From: Maxime Coquelin @ 2023-02-07 14:14 UTC (permalink / raw)
To: Xia, Chenbo, dev, david.marchand, eperezma
On 1/31/23 06:19, Xia, Chenbo wrote:
> Hi Maxime,
>
>> -----Original Message-----
>> From: Maxime Coquelin <maxime.coquelin@redhat.com>
>> Sent: Wednesday, November 30, 2022 11:57 PM
>> To: dev@dpdk.org; Xia, Chenbo <chenbo.xia@intel.com>;
>> david.marchand@redhat.com; eperezma@redhat.com
>> Cc: Maxime Coquelin <maxime.coquelin@redhat.com>
>> Subject: [PATCH v1 21/21] net/virtio-user: remove max queues limitation
>>
>> This patch removes the limitation of 8 queue pairs by
>> dynamically allocating vring metadata once we know the
>> maximum number of queue pairs supported by the backend.
>>
>> This is especially useful for Vhost-vDPA with physical
>> devices, where the maximum queues supported may be much
>> more than 8 pairs.
>>
>> Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
>> ---
>> drivers/net/virtio/virtio.h | 6 -
>> .../net/virtio/virtio_user/virtio_user_dev.c | 118 ++++++++++++++----
>> .../net/virtio/virtio_user/virtio_user_dev.h | 16 +--
>> drivers/net/virtio/virtio_user_ethdev.c | 17 +--
>> 4 files changed, 109 insertions(+), 48 deletions(-)
>>
>> diff --git a/drivers/net/virtio/virtio.h b/drivers/net/virtio/virtio.h
>> index 5c8f71a44d..04a897bf51 100644
>> --- a/drivers/net/virtio/virtio.h
>> +++ b/drivers/net/virtio/virtio.h
>> @@ -124,12 +124,6 @@
>> VIRTIO_NET_HASH_TYPE_UDP_EX)
>>
>>
>> -/*
>> - * Maximum number of virtqueues per device.
>> - */
>> -#define VIRTIO_MAX_VIRTQUEUE_PAIRS 8
>> -#define VIRTIO_MAX_VIRTQUEUES (VIRTIO_MAX_VIRTQUEUE_PAIRS * 2 + 1)
>> -
>> /* VirtIO device IDs. */
>> #define VIRTIO_ID_NETWORK 0x01
>> #define VIRTIO_ID_BLOCK 0x02
>> diff --git a/drivers/net/virtio/virtio_user/virtio_user_dev.c
>> b/drivers/net/virtio/virtio_user/virtio_user_dev.c
>> index 7c48c9bb29..aa24fdea70 100644
>> --- a/drivers/net/virtio/virtio_user/virtio_user_dev.c
>> +++ b/drivers/net/virtio/virtio_user/virtio_user_dev.c
>> @@ -17,6 +17,7 @@
>> #include <rte_alarm.h>
>> #include <rte_string_fns.h>
>> #include <rte_eal_memconfig.h>
>> +#include <rte_malloc.h>
>>
>> #include "vhost.h"
>> #include "virtio_user_dev.h"
>> @@ -58,8 +59,8 @@ virtio_user_kick_queue(struct virtio_user_dev *dev, uint32_t queue_sel)
>> int ret;
>> struct vhost_vring_file file;
>> struct vhost_vring_state state;
>> - struct vring *vring = &dev->vrings[queue_sel];
>> - struct vring_packed *pq_vring = &dev->packed_vrings[queue_sel];
>> + struct vring *vring = &dev->vrings.split[queue_sel];
>> + struct vring_packed *pq_vring = &dev->vrings.packed[queue_sel];
>> struct vhost_vring_addr addr = {
>> .index = queue_sel,
>> .log_guest_addr = 0,
>> @@ -299,18 +300,6 @@ virtio_user_dev_init_max_queue_pairs(struct virtio_user_dev *dev, uint32_t user_
>> return ret;
>> }
>>
>> - if (dev->max_queue_pairs > VIRTIO_MAX_VIRTQUEUE_PAIRS) {
>> - /*
>> - * If the device supports control queue, the control queue
>> - * index is max_virtqueue_pairs * 2. Disable MQ if it happens.
>> - */
>> - PMD_DRV_LOG(ERR, "(%s) Device advertises too many queues (%u, max supported %u)",
>> - dev->path, dev->max_queue_pairs, VIRTIO_MAX_VIRTQUEUE_PAIRS);
>> - dev->max_queue_pairs = 1;
>> -
>> - return -1;
>> - }
>> -
>> return 0;
>> }
>>
>> @@ -579,6 +568,86 @@ virtio_user_dev_setup(struct virtio_user_dev *dev)
>> return 0;
>> }
>>
>> +static int
>> +virtio_user_alloc_vrings(struct virtio_user_dev *dev)
>> +{
>> + int i, size, nr_vrings;
>> +
>> + nr_vrings = dev->max_queue_pairs * 2;
>> + if (dev->hw_cvq)
>> + nr_vrings++;
>> +
>> + dev->callfds = rte_zmalloc("virtio_user_dev", nr_vrings * sizeof(*dev->callfds), 0);
>> + if (!dev->callfds) {
>> + PMD_INIT_LOG(ERR, "(%s) Failed to alloc callfds", dev->path);
>> + return -1;
>> + }
>> +
>> + dev->kickfds = rte_zmalloc("virtio_user_dev", nr_vrings * sizeof(*dev->kickfds), 0);
>> + if (!dev->kickfds) {
>> + PMD_INIT_LOG(ERR, "(%s) Failed to alloc kickfds", dev->path);
>> + goto free_callfds;
>> + }
>> +
>> + for (i = 0; i < nr_vrings; i++) {
>> + dev->callfds[i] = -1;
>> + dev->kickfds[i] = -1;
>> + }
>> +
>> + size = RTE_MAX(sizeof(*dev->vrings.split), sizeof(*dev->vrings.packed));
>> + dev->vrings.ptr = rte_zmalloc("virtio_user_dev", nr_vrings * size, 0);
>> + if (!dev->vrings.ptr) {
>> + PMD_INIT_LOG(ERR, "(%s) Failed to alloc vrings metadata", dev->path);
>> + goto free_kickfds;
>> + }
>> +
>> + dev->packed_queues = rte_zmalloc("virtio_user_dev", nr_vrings * sizeof(*dev->packed_queues), 0);
>
> Should we pass in whether the virtqueue is packed, to avoid allocating
> dev->packed_queues when it is not needed and to know the correct size
> of dev->vrings.ptr?
That's not ideal because the negotiation hasn't taken place yet with
the Virtio layer, but it should be doable for packed ring specifically,
since it is only possible to disable it via the devargs, not at run
time.
Thanks,
Maxime
end of thread, other threads:[~2023-02-07 14:14 UTC | newest]
Thread overview: 48+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2022-11-30 15:56 [PATCH v1 00/21] Add control queue & MQ support to Virtio-user vDPA Maxime Coquelin
2022-11-30 15:56 ` [PATCH v1 01/21] net/virtio: move CVQ code into a dedicated file Maxime Coquelin
2023-01-30 7:50 ` Xia, Chenbo
2022-11-30 15:56 ` [PATCH v1 02/21] net/virtio: introduce notify callback for control queue Maxime Coquelin
2023-01-30 7:51 ` Xia, Chenbo
2022-11-30 15:56 ` [PATCH v1 03/21] net/virtio: virtqueue headers alloc refactoring Maxime Coquelin
2023-01-30 7:51 ` Xia, Chenbo
2022-11-30 15:56 ` [PATCH v1 04/21] net/virtio: remove port ID info from Rx queue Maxime Coquelin
2023-01-30 7:51 ` Xia, Chenbo
2022-11-30 15:56 ` [PATCH v1 05/21] net/virtio: remove unused fields in Tx queue struct Maxime Coquelin
2023-01-30 7:51 ` Xia, Chenbo
2022-11-30 15:56 ` [PATCH v1 06/21] net/virtio: remove unused queue ID field in Rx queue Maxime Coquelin
2023-01-30 7:52 ` Xia, Chenbo
2022-11-30 15:56 ` [PATCH v1 07/21] net/virtio: remove unused Port ID in control queue Maxime Coquelin
2023-01-30 7:52 ` Xia, Chenbo
2022-11-30 15:56 ` [PATCH v1 08/21] net/virtio: move vring memzone to virtqueue struct Maxime Coquelin
2023-01-30 7:52 ` Xia, Chenbo
2022-11-30 15:56 ` [PATCH v1 09/21] net/virtio: refactor indirect desc headers init Maxime Coquelin
2023-01-30 7:52 ` Xia, Chenbo
2022-11-30 15:56 ` [PATCH v1 10/21] net/virtio: alloc Rx SW ring only if vectorized path Maxime Coquelin
2023-01-30 7:49 ` Xia, Chenbo
2023-02-07 10:12 ` Maxime Coquelin
2022-11-30 15:56 ` [PATCH v1 11/21] net/virtio: extract virtqueue init from virtio queue init Maxime Coquelin
2023-01-30 7:53 ` Xia, Chenbo
2022-11-30 15:56 ` [PATCH v1 12/21] net/virtio-user: fix device starting failure handling Maxime Coquelin
2023-01-31 5:20 ` Xia, Chenbo
2022-11-30 15:56 ` [PATCH v1 13/21] net/virtio-user: simplify queues setup Maxime Coquelin
2023-01-31 5:21 ` Xia, Chenbo
2022-11-30 15:56 ` [PATCH v1 14/21] net/virtio-user: use proper type for number of queue pairs Maxime Coquelin
2023-01-31 5:21 ` Xia, Chenbo
2022-11-30 15:56 ` [PATCH v1 15/21] net/virtio-user: get max number of queue pairs from device Maxime Coquelin
2023-01-31 5:21 ` Xia, Chenbo
2022-11-30 15:56 ` [PATCH v1 16/21] net/virtio-user: allocate shadow control queue Maxime Coquelin
2023-01-31 5:21 ` Xia, Chenbo
2022-11-30 15:56 ` [PATCH v1 17/21] net/virtio-user: send shadow virtqueue info to the backend Maxime Coquelin
2023-01-31 5:22 ` Xia, Chenbo
2022-11-30 15:56 ` [PATCH v1 18/21] net/virtio-user: add new callback to enable control queue Maxime Coquelin
2023-01-31 5:22 ` Xia, Chenbo
2022-11-30 15:56 ` [PATCH v1 19/21] net/virtio-user: forward control messages to shadow queue Maxime Coquelin
2022-11-30 16:54 ` Stephen Hemminger
2022-12-06 12:58 ` Maxime Coquelin
2022-11-30 15:56 ` [PATCH v1 20/21] net/virtio-user: advertize control VQ support with vDPA Maxime Coquelin
2023-01-31 5:24 ` Xia, Chenbo
2022-11-30 15:56 ` [PATCH v1 21/21] net/virtio-user: remove max queues limitation Maxime Coquelin
2023-01-31 5:19 ` Xia, Chenbo
2023-02-07 14:14 ` Maxime Coquelin
2023-01-30 5:57 ` [PATCH v1 00/21] Add control queue & MQ support to Virtio-user vDPA Xia, Chenbo
2023-02-07 10:08 ` Maxime Coquelin