DPDK patches and discussions
* [dpdk-dev] [PATCH 0/8] net/virtio: fix queue reconfigure issue
@ 2016-11-03 16:09 Yuanhan Liu
  2016-11-03 16:09 ` [dpdk-dev] [PATCH 1/8] net/virtio: revert "virtio: fix restart" Yuanhan Liu
                   ` (8 more replies)
  0 siblings, 9 replies; 43+ messages in thread
From: Yuanhan Liu @ 2016-11-03 16:09 UTC (permalink / raw)
  To: dev; +Cc: Ilya Maximets

This patchset fixes an issue reported by Ilya at the late stage of the
last release: virtio/vhost becomes broken when queue setup is done 2 or
more times (see patch 4 for details). I sensed it would take a fair
amount of code work, so I delayed it to this release. However, I failed
to make the fix in the early stage of this release, so we have come to
the same situation again: we are at the late stage of v16.11 :(

Honestly, though, I don't want to miss it again in this release, as it
is hard to backport such a fix to a stable release. It's not a bug that
can be fixed by a few lines of code: the virtio init logic is wrong
enough that I had to change quite a few places to fix it. Meanwhile, I
have done my best to keep the change as minimal as possible.

Besides that, considering that v16.11 will be an LTS release and this is
such a severe bug that it should be fixed ASAP (at the very least, we
should make it work in an LTS release), I tried hard today to put this
patchset together, in the hope that we can fix this severe issue in this
release.

The issue is described at length in patch 4 "net/virtio: allocate queue
at init stage".

Again, it's not a bug that can be fixed by one patch, which is why you
see no "fix" tag on these patches individually: all the patches below
work together to fix this issue.

Thanks.

	--yliu


---
Yuanhan Liu (8):
  net/virtio: revert "virtio: fix restart"
  net/virtio: simplify queue memzone name
  net/virtio: simplify queue allocation
  net/virtio: allocate queue at init stage
  net/virtio: initiate vring at init stage
  net/virtio: move queue configure code to proper place
  net/virtio: complete init stage at the right place
  net/virtio: remove started field

 drivers/net/virtio/virtio_ethdev.c | 182 ++++++++++++++++-----------
 drivers/net/virtio/virtio_ethdev.h |  10 --
 drivers/net/virtio/virtio_pci.h    |   3 +-
 drivers/net/virtio/virtio_rxtx.c   | 247 ++++++++++++++-----------------------
 drivers/net/virtio/virtqueue.h     |   7 ++
 5 files changed, 208 insertions(+), 241 deletions(-)

-- 
1.9.0

^ permalink raw reply	[flat|nested] 43+ messages in thread

* [dpdk-dev] [PATCH 1/8] net/virtio: revert "virtio: fix restart"
  2016-11-03 16:09 [dpdk-dev] [PATCH 0/8] net/virtio: fix queue reconfigure issue Yuanhan Liu
@ 2016-11-03 16:09 ` Yuanhan Liu
  2016-11-03 20:36   ` Maxime Coquelin
  2016-11-04  8:10   ` Maxime Coquelin
  2016-11-03 16:09 ` [dpdk-dev] [PATCH 2/8] net/virtio: simplify queue memzone name Yuanhan Liu
                   ` (7 subsequent siblings)
  8 siblings, 2 replies; 43+ messages in thread
From: Yuanhan Liu @ 2016-11-03 16:09 UTC (permalink / raw)
  To: dev; +Cc: stable, Ilya Maximets

This reverts commit 9a0615af7746 ("virtio: fix restart"); the conflict
is resolved manually.

Kyle reported the following issue with the above commit:

    qemu-kvm: Guest moved used index from 5 to 1

with the following steps:

    1) Start my virtio interfaces
    2) Send some traffic into/out of the interfaces
    3) Stop the interfaces
    4) Start the interfaces
    5) Send some more traffic

And here are some quotes from Kyle's analysis:

    Prior to the patch, if an interface were stopped then started, without
    restarting the application, the queues would be left as-is, because
    hw->started would be set to 1. Now, calling stop sets hw->started to 0,
    which means the next call to start will "touch the queues". This is the
    unintended side-effect that causes the problem.

Fixes: 9a0615af7746 ("virtio: fix restart")

Cc: Jianfeng Tan <jianfeng.tan@intel.com>
Cc: <stable@dpdk.org>
Reported-by: Kyle Larose <klarose@sandvine.com>
Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
---
 drivers/net/virtio/virtio_ethdev.c | 15 ++++++---------
 1 file changed, 6 insertions(+), 9 deletions(-)

diff --git a/drivers/net/virtio/virtio_ethdev.c b/drivers/net/virtio/virtio_ethdev.c
index b388134..5815875 100644
--- a/drivers/net/virtio/virtio_ethdev.c
+++ b/drivers/net/virtio/virtio_ethdev.c
@@ -550,9 +550,6 @@ virtio_dev_close(struct rte_eth_dev *dev)
 
 	PMD_INIT_LOG(DEBUG, "virtio_dev_close");
 
-	if (hw->started == 1)
-		virtio_dev_stop(dev);
-
 	if (hw->cvq)
 		virtio_dev_queue_release(hw->cvq->vq);
 
@@ -560,6 +557,7 @@ virtio_dev_close(struct rte_eth_dev *dev)
 	if (dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC)
 		vtpci_irq_config(hw, VIRTIO_MSI_NO_VECTOR);
 	vtpci_reset(hw);
+	hw->started = 0;
 	virtio_dev_free_mbufs(dev);
 	virtio_free_queues(dev);
 }
@@ -1296,15 +1294,17 @@ static int
 eth_virtio_dev_uninit(struct rte_eth_dev *eth_dev)
 {
 	struct rte_pci_device *pci_dev;
+	struct virtio_hw *hw = eth_dev->data->dev_private;
 
 	PMD_INIT_FUNC_TRACE();
 
 	if (rte_eal_process_type() == RTE_PROC_SECONDARY)
 		return -EPERM;
 
-	/* Close it anyway since there's no way to know if closed */
-	virtio_dev_close(eth_dev);
-
+	if (hw->started == 1) {
+		virtio_dev_stop(eth_dev);
+		virtio_dev_close(eth_dev);
+	}
 	pci_dev = eth_dev->pci_dev;
 
 	eth_dev->dev_ops = NULL;
@@ -1543,12 +1543,9 @@ static void
 virtio_dev_stop(struct rte_eth_dev *dev)
 {
 	struct rte_eth_link link;
-	struct virtio_hw *hw = dev->data->dev_private;
 
 	PMD_INIT_LOG(DEBUG, "stop");
 
-	hw->started = 0;
-
 	if (dev->data->dev_conf.intr_conf.lsc)
 		rte_intr_disable(&dev->pci_dev->intr_handle);
 
-- 
1.9.0


* [dpdk-dev] [PATCH 2/8] net/virtio: simplify queue memzone name
  2016-11-03 16:09 [dpdk-dev] [PATCH 0/8] net/virtio: fix queue reconfigure issue Yuanhan Liu
  2016-11-03 16:09 ` [dpdk-dev] [PATCH 1/8] net/virtio: revert "virtio: fix restart" Yuanhan Liu
@ 2016-11-03 16:09 ` Yuanhan Liu
  2016-11-03 20:41   ` Maxime Coquelin
  2016-11-03 16:09 ` [dpdk-dev] [PATCH 3/8] net/virtio: simplify queue allocation Yuanhan Liu
                   ` (6 subsequent siblings)
  8 siblings, 1 reply; 43+ messages in thread
From: Yuanhan Liu @ 2016-11-03 16:09 UTC (permalink / raw)
  To: dev; +Cc: Ilya Maximets

Instead of setting up queue memzone names like "port0_rxq0" and
"port0_txq0", they can be simplified a bit to "port0_vq0", "port0_vq1",
and so on.

Meanwhile, the code is also simplified a bit.

Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
---
 drivers/net/virtio/virtio_ethdev.c | 10 ++++------
 1 file changed, 4 insertions(+), 6 deletions(-)

diff --git a/drivers/net/virtio/virtio_ethdev.c b/drivers/net/virtio/virtio_ethdev.c
index 5815875..d082df5 100644
--- a/drivers/net/virtio/virtio_ethdev.c
+++ b/drivers/net/virtio/virtio_ethdev.c
@@ -312,7 +312,6 @@ int virtio_dev_queue_setup(struct rte_eth_dev *dev,
 	struct virtnet_tx *txvq = NULL;
 	struct virtnet_ctl *cvq = NULL;
 	struct virtqueue *vq;
-	const char *queue_names[] = {"rvq", "txq", "cvq"};
 	size_t sz_vq, sz_q = 0, sz_hdr_mz = 0;
 	void *sw_ring = NULL;
 	int ret;
@@ -335,8 +334,8 @@ int virtio_dev_queue_setup(struct rte_eth_dev *dev,
 		return -EINVAL;
 	}
 
-	snprintf(vq_name, sizeof(vq_name), "port%d_%s%d",
-		 dev->data->port_id, queue_names[queue_type], queue_idx);
+	snprintf(vq_name, sizeof(vq_name), "port%d_vq%d",
+		 dev->data->port_id, vtpci_queue_idx);
 
 	sz_vq = RTE_ALIGN_CEIL(sizeof(*vq) +
 				vq_size * sizeof(struct vq_desc_extra),
@@ -398,9 +397,8 @@ int virtio_dev_queue_setup(struct rte_eth_dev *dev,
 		     (uint64_t)(uintptr_t)mz->addr);
 
 	if (sz_hdr_mz) {
-		snprintf(vq_hdr_name, sizeof(vq_hdr_name), "port%d_%s%d_hdr",
-			 dev->data->port_id, queue_names[queue_type],
-			 queue_idx);
+		snprintf(vq_hdr_name, sizeof(vq_hdr_name), "port%d_vq%d_hdr",
+			 dev->data->port_id, vtpci_queue_idx);
 		hdr_mz = rte_memzone_reserve_aligned(vq_hdr_name, sz_hdr_mz,
 						     socket_id, 0,
 						     RTE_CACHE_LINE_SIZE);
-- 
1.9.0


* [dpdk-dev] [PATCH 3/8] net/virtio: simplify queue allocation
  2016-11-03 16:09 [dpdk-dev] [PATCH 0/8] net/virtio: fix queue reconfigure issue Yuanhan Liu
  2016-11-03 16:09 ` [dpdk-dev] [PATCH 1/8] net/virtio: revert "virtio: fix restart" Yuanhan Liu
  2016-11-03 16:09 ` [dpdk-dev] [PATCH 2/8] net/virtio: simplify queue memzone name Yuanhan Liu
@ 2016-11-03 16:09 ` Yuanhan Liu
  2016-11-03 20:48   ` Maxime Coquelin
  2016-11-03 16:09 ` [dpdk-dev] [PATCH 4/8] net/virtio: allocate queue at init stage Yuanhan Liu
                   ` (5 subsequent siblings)
  8 siblings, 1 reply; 43+ messages in thread
From: Yuanhan Liu @ 2016-11-03 16:09 UTC (permalink / raw)
  To: dev; +Cc: Ilya Maximets

Make rxq/txq/cq union fields of the virtqueue struct. This simplifies
the vq allocation a bit: we no longer need to calculate the allocation
size based on the queue type.

Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
---
 drivers/net/virtio/virtio_ethdev.c | 18 +++++++-----------
 drivers/net/virtio/virtqueue.h     |  7 +++++++
 2 files changed, 14 insertions(+), 11 deletions(-)

diff --git a/drivers/net/virtio/virtio_ethdev.c b/drivers/net/virtio/virtio_ethdev.c
index d082df5..5a2c14b 100644
--- a/drivers/net/virtio/virtio_ethdev.c
+++ b/drivers/net/virtio/virtio_ethdev.c
@@ -312,7 +312,7 @@ int virtio_dev_queue_setup(struct rte_eth_dev *dev,
 	struct virtnet_tx *txvq = NULL;
 	struct virtnet_ctl *cvq = NULL;
 	struct virtqueue *vq;
-	size_t sz_vq, sz_q = 0, sz_hdr_mz = 0;
+	size_t sz_hdr_mz = 0;
 	void *sw_ring = NULL;
 	int ret;
 
@@ -337,25 +337,21 @@ int virtio_dev_queue_setup(struct rte_eth_dev *dev,
 	snprintf(vq_name, sizeof(vq_name), "port%d_vq%d",
 		 dev->data->port_id, vtpci_queue_idx);
 
-	sz_vq = RTE_ALIGN_CEIL(sizeof(*vq) +
+	size = RTE_ALIGN_CEIL(sizeof(*vq) +
 				vq_size * sizeof(struct vq_desc_extra),
 				RTE_CACHE_LINE_SIZE);
-	if (queue_type == VTNET_RQ) {
-		sz_q = sz_vq + sizeof(*rxvq);
-	} else if (queue_type == VTNET_TQ) {
-		sz_q = sz_vq + sizeof(*txvq);
+	if (queue_type == VTNET_TQ) {
 		/*
 		 * For each xmit packet, allocate a virtio_net_hdr
 		 * and indirect ring elements
 		 */
 		sz_hdr_mz = vq_size * sizeof(struct virtio_tx_region);
 	} else if (queue_type == VTNET_CQ) {
-		sz_q = sz_vq + sizeof(*cvq);
 		/* Allocate a page for control vq command, data and status */
 		sz_hdr_mz = PAGE_SIZE;
 	}
 
-	vq = rte_zmalloc_socket(vq_name, sz_q, RTE_CACHE_LINE_SIZE, socket_id);
+	vq = rte_zmalloc_socket(vq_name, size, RTE_CACHE_LINE_SIZE, socket_id);
 	if (vq == NULL) {
 		PMD_INIT_LOG(ERR, "can not allocate vq");
 		return -ENOMEM;
@@ -425,14 +421,14 @@ int virtio_dev_queue_setup(struct rte_eth_dev *dev,
 		}
 
 		vq->sw_ring = sw_ring;
-		rxvq = (struct virtnet_rx *)RTE_PTR_ADD(vq, sz_vq);
+		rxvq = &vq->rxq;
 		rxvq->vq = vq;
 		rxvq->port_id = dev->data->port_id;
 		rxvq->queue_id = queue_idx;
 		rxvq->mz = mz;
 		*pvq = rxvq;
 	} else if (queue_type == VTNET_TQ) {
-		txvq = (struct virtnet_tx *)RTE_PTR_ADD(vq, sz_vq);
+		txvq = &vq->txq;
 		txvq->vq = vq;
 		txvq->port_id = dev->data->port_id;
 		txvq->queue_id = queue_idx;
@@ -442,7 +438,7 @@ int virtio_dev_queue_setup(struct rte_eth_dev *dev,
 
 		*pvq = txvq;
 	} else if (queue_type == VTNET_CQ) {
-		cvq = (struct virtnet_ctl *)RTE_PTR_ADD(vq, sz_vq);
+		cvq = &vq->cq;
 		cvq->vq = vq;
 		cvq->mz = mz;
 		cvq->virtio_net_hdr_mz = hdr_mz;
diff --git a/drivers/net/virtio/virtqueue.h b/drivers/net/virtio/virtqueue.h
index ef0027b..bbeb2f2 100644
--- a/drivers/net/virtio/virtqueue.h
+++ b/drivers/net/virtio/virtqueue.h
@@ -44,6 +44,7 @@
 #include "virtio_pci.h"
 #include "virtio_ring.h"
 #include "virtio_logs.h"
+#include "virtio_rxtx.h"
 
 struct rte_mbuf;
 
@@ -191,6 +192,12 @@ struct virtqueue {
 	void *vq_ring_virt_mem;  /**< linear address of vring*/
 	unsigned int vq_ring_size;
 
+	union {
+		struct virtnet_rx rxq;
+		struct virtnet_tx txq;
+		struct virtnet_ctl cq;
+	};
+
 	phys_addr_t vq_ring_mem; /**< physical address of vring,
 				  * or virtual address for virtio_user. */
 
-- 
1.9.0


* [dpdk-dev] [PATCH 4/8] net/virtio: allocate queue at init stage
  2016-11-03 16:09 [dpdk-dev] [PATCH 0/8] net/virtio: fix queue reconfigure issue Yuanhan Liu
                   ` (2 preceding siblings ...)
  2016-11-03 16:09 ` [dpdk-dev] [PATCH 3/8] net/virtio: simplify queue allocation Yuanhan Liu
@ 2016-11-03 16:09 ` Yuanhan Liu
  2016-11-03 21:11   ` Maxime Coquelin
                     ` (2 more replies)
  2016-11-03 16:09 ` [dpdk-dev] [PATCH 5/8] net/virtio: initiate vring " Yuanhan Liu
                   ` (4 subsequent siblings)
  8 siblings, 3 replies; 43+ messages in thread
From: Yuanhan Liu @ 2016-11-03 16:09 UTC (permalink / raw)
  To: dev; +Cc: Ilya Maximets

Queue allocation should be done once, since the queue related info (such
as the vring address) will be passed to the vhost-user backend only
once, unless the virtio device is reset.

That means if you allocate queues again after the vhost-user
negotiation, the vhost-user backend will not be informed any more,
leading to a state where the vring info mismatches between the virtio
PMD driver and the vhost backend: the driver switches to the new address
that has just been allocated, while the vhost backend still sticks to
the old address that was assigned in the init stage.

Unfortunately, that is exactly how the virtio driver is coded so far:
queue allocation is done at the queue_setup stage (when
rte_eth_tx/rx_queue_setup is invoked). This is wrong, because
queue_setup can be invoked several times. For example,

    $ start_testpmd.sh ... --txq=1 --rxq=1 ...
    > port stop 0
    > port config all txq 1 # just trigger the queue_setup callback again
    > port config all rxq 1
    > port start 0

The right way is to allocate the queues in the init stage, so that the
vring info stays consistent with the vhost-user backend.

Besides that, we should allocate the max number of queue pairs the
device supports, and not just the number of queue pairs configured
first, to make the following case work.

    $ start_testpmd.sh ... --txq=1 --rxq=1 ...
    > port stop 0
    > port config all txq 2
    > port config all rxq 2
    > port start 0

Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
---
 drivers/net/virtio/virtio_ethdev.c | 105 +++++++++++++++++++++++--------------
 drivers/net/virtio/virtio_ethdev.h |   8 ---
 drivers/net/virtio/virtio_pci.h    |   2 +
 drivers/net/virtio/virtio_rxtx.c   |  38 +++++++-------
 4 files changed, 85 insertions(+), 68 deletions(-)

diff --git a/drivers/net/virtio/virtio_ethdev.c b/drivers/net/virtio/virtio_ethdev.c
index 5a2c14b..253bcb5 100644
--- a/drivers/net/virtio/virtio_ethdev.c
+++ b/drivers/net/virtio/virtio_ethdev.c
@@ -295,13 +295,19 @@ virtio_dev_queue_release(struct virtqueue *vq)
 	}
 }
 
-int virtio_dev_queue_setup(struct rte_eth_dev *dev,
-			int queue_type,
-			uint16_t queue_idx,
-			uint16_t vtpci_queue_idx,
-			uint16_t nb_desc,
-			unsigned int socket_id,
-			void **pvq)
+static int
+virtio_get_queue_type(struct virtio_hw *hw, uint16_t vtpci_queue_idx)
+{
+	if (vtpci_queue_idx == hw->max_queue_pairs * 2)
+		return VTNET_CQ;
+	else if (vtpci_queue_idx % 2 == 0)
+		return VTNET_RQ;
+	else
+		return VTNET_TQ;
+}
+
+static int
+virtio_init_queue(struct rte_eth_dev *dev, uint16_t vtpci_queue_idx)
 {
 	char vq_name[VIRTQUEUE_MAX_NAME_SZ];
 	char vq_hdr_name[VIRTQUEUE_MAX_NAME_SZ];
@@ -314,6 +320,7 @@ int virtio_dev_queue_setup(struct rte_eth_dev *dev,
 	struct virtqueue *vq;
 	size_t sz_hdr_mz = 0;
 	void *sw_ring = NULL;
+	int queue_type = virtio_get_queue_type(hw, vtpci_queue_idx);
 	int ret;
 
 	PMD_INIT_LOG(DEBUG, "setting up queue: %u", vtpci_queue_idx);
@@ -351,18 +358,18 @@ int virtio_dev_queue_setup(struct rte_eth_dev *dev,
 		sz_hdr_mz = PAGE_SIZE;
 	}
 
-	vq = rte_zmalloc_socket(vq_name, size, RTE_CACHE_LINE_SIZE, socket_id);
+	vq = rte_zmalloc_socket(vq_name, size, RTE_CACHE_LINE_SIZE,
+				SOCKET_ID_ANY);
 	if (vq == NULL) {
 		PMD_INIT_LOG(ERR, "can not allocate vq");
 		return -ENOMEM;
 	}
+	hw->vqs[vtpci_queue_idx] = vq;
+
 	vq->hw = hw;
 	vq->vq_queue_index = vtpci_queue_idx;
 	vq->vq_nentries = vq_size;
-
-	if (nb_desc == 0 || nb_desc > vq_size)
-		nb_desc = vq_size;
-	vq->vq_free_cnt = nb_desc;
+	vq->vq_free_cnt = vq_size;
 
 	/*
 	 * Reserve a memzone for vring elements
@@ -372,7 +379,8 @@ int virtio_dev_queue_setup(struct rte_eth_dev *dev,
 	PMD_INIT_LOG(DEBUG, "vring_size: %d, rounded_vring_size: %d",
 		     size, vq->vq_ring_size);
 
-	mz = rte_memzone_reserve_aligned(vq_name, vq->vq_ring_size, socket_id,
+	mz = rte_memzone_reserve_aligned(vq_name, vq->vq_ring_size,
+					 SOCKET_ID_ANY,
 					 0, VIRTIO_PCI_VRING_ALIGN);
 	if (mz == NULL) {
 		if (rte_errno == EEXIST)
@@ -396,7 +404,7 @@ int virtio_dev_queue_setup(struct rte_eth_dev *dev,
 		snprintf(vq_hdr_name, sizeof(vq_hdr_name), "port%d_vq%d_hdr",
 			 dev->data->port_id, vtpci_queue_idx);
 		hdr_mz = rte_memzone_reserve_aligned(vq_hdr_name, sz_hdr_mz,
-						     socket_id, 0,
+						     SOCKET_ID_ANY, 0,
 						     RTE_CACHE_LINE_SIZE);
 		if (hdr_mz == NULL) {
 			if (rte_errno == EEXIST)
@@ -413,7 +421,7 @@ int virtio_dev_queue_setup(struct rte_eth_dev *dev,
 			       sizeof(vq->sw_ring[0]);
 
 		sw_ring = rte_zmalloc_socket("sw_ring", sz_sw,
-					     RTE_CACHE_LINE_SIZE, socket_id);
+				RTE_CACHE_LINE_SIZE, SOCKET_ID_ANY);
 		if (!sw_ring) {
 			PMD_INIT_LOG(ERR, "can not allocate RX soft ring");
 			ret = -ENOMEM;
@@ -424,19 +432,14 @@ int virtio_dev_queue_setup(struct rte_eth_dev *dev,
 		rxvq = &vq->rxq;
 		rxvq->vq = vq;
 		rxvq->port_id = dev->data->port_id;
-		rxvq->queue_id = queue_idx;
 		rxvq->mz = mz;
-		*pvq = rxvq;
 	} else if (queue_type == VTNET_TQ) {
 		txvq = &vq->txq;
 		txvq->vq = vq;
 		txvq->port_id = dev->data->port_id;
-		txvq->queue_id = queue_idx;
 		txvq->mz = mz;
 		txvq->virtio_net_hdr_mz = hdr_mz;
 		txvq->virtio_net_hdr_mem = hdr_mz->phys_addr;
-
-		*pvq = txvq;
 	} else if (queue_type == VTNET_CQ) {
 		cvq = &vq->cq;
 		cvq->vq = vq;
@@ -444,7 +447,8 @@ int virtio_dev_queue_setup(struct rte_eth_dev *dev,
 		cvq->virtio_net_hdr_mz = hdr_mz;
 		cvq->virtio_net_hdr_mem = hdr_mz->phys_addr;
 		memset(cvq->virtio_net_hdr_mz->addr, 0, PAGE_SIZE);
-		*pvq = cvq;
+
+		hw->cvq = cvq;
 	}
 
 	/* For virtio_user case (that is when dev->pci_dev is NULL), we use
@@ -502,23 +506,45 @@ fail_q_alloc:
 }
 
 static int
-virtio_dev_cq_queue_setup(struct rte_eth_dev *dev, uint16_t vtpci_queue_idx,
-		uint32_t socket_id)
+virtio_alloc_queues(struct rte_eth_dev *dev)
 {
-	struct virtnet_ctl *cvq;
-	int ret;
 	struct virtio_hw *hw = dev->data->dev_private;
+	uint16_t nr_vq = hw->max_queue_pairs * 2;
+	uint16_t i;
+	int ret;
 
-	PMD_INIT_FUNC_TRACE();
-	ret = virtio_dev_queue_setup(dev, VTNET_CQ, VTNET_SQ_CQ_QUEUE_IDX,
-			vtpci_queue_idx, 0, socket_id, (void **)&cvq);
-	if (ret < 0) {
-		PMD_INIT_LOG(ERR, "control vq initialization failed");
-		return ret;
+	if (vtpci_with_feature(hw, VIRTIO_NET_F_CTRL_VQ))
+		nr_vq += 1;
+
+	hw->vqs = rte_zmalloc(NULL, sizeof(struct virtqueue *) * nr_vq, 0);
+	if (!hw->vqs) {
+		PMD_INIT_LOG(ERR, "failed to allocate vqs");
+		return -ENOMEM;
+	}
+
+	for (i = 0; i < nr_vq; i++) {
+		ret = virtio_init_queue(dev, i);
+		if (ret < 0)
+			goto cleanup;
 	}
 
-	hw->cvq = cvq;
 	return 0;
+
+cleanup:
+	/*
+	 * ctrl queue is the last queue; if we go here, it means the ctrl
+	 * queue is not allocated, that we can do no cleanup for it here.
+	 */
+	while (i > 0) {
+		i--;
+		if (i % 2 == 0)
+			virtio_dev_rx_queue_release(&hw->vqs[i]->rxq);
+		else
+			virtio_dev_tx_queue_release(&hw->vqs[i]->txq);
+	}
+	rte_free(hw->vqs);
+
+	return ret;
 }
 
 static void
@@ -1141,6 +1167,7 @@ virtio_init_device(struct rte_eth_dev *eth_dev, uint64_t req_features)
 	struct virtio_net_config *config;
 	struct virtio_net_config local_config;
 	struct rte_pci_device *pci_dev = eth_dev->pci_dev;
+	int ret;
 
 	/* Reset the device although not necessary at startup */
 	vtpci_reset(hw);
@@ -1222,6 +1249,10 @@ virtio_init_device(struct rte_eth_dev *eth_dev, uint64_t req_features)
 		hw->max_queue_pairs = 1;
 	}
 
+	ret = virtio_alloc_queues(eth_dev);
+	if (ret < 0)
+		return ret;
+
 	if (pci_dev)
 		PMD_INIT_LOG(DEBUG, "port %d vendorID=0x%x deviceID=0x%x",
 			eth_dev->data->port_id, pci_dev->id.vendor_id,
@@ -1390,15 +1421,9 @@ virtio_dev_configure(struct rte_eth_dev *dev)
 		return -ENOTSUP;
 	}
 
-	/* Setup and start control queue */
-	if (vtpci_with_feature(hw, VIRTIO_NET_F_CTRL_VQ)) {
-		ret = virtio_dev_cq_queue_setup(dev,
-			hw->max_queue_pairs * 2,
-			SOCKET_ID_ANY);
-		if (ret < 0)
-			return ret;
+	/* start control queue */
+	if (vtpci_with_feature(hw, VIRTIO_NET_F_CTRL_VQ))
 		virtio_dev_cq_start(dev);
-	}
 
 	hw->vlan_strip = rxmode->hw_vlan_strip;
 
diff --git a/drivers/net/virtio/virtio_ethdev.h b/drivers/net/virtio/virtio_ethdev.h
index de33b32..5db8632 100644
--- a/drivers/net/virtio/virtio_ethdev.h
+++ b/drivers/net/virtio/virtio_ethdev.h
@@ -80,14 +80,6 @@ void virtio_dev_cq_start(struct rte_eth_dev *dev);
  */
 void virtio_dev_rxtx_start(struct rte_eth_dev *dev);
 
-int virtio_dev_queue_setup(struct rte_eth_dev *dev,
-			int queue_type,
-			uint16_t queue_idx,
-			uint16_t vtpci_queue_idx,
-			uint16_t nb_desc,
-			unsigned int socket_id,
-			void **pvq);
-
 void virtio_dev_queue_release(struct virtqueue *vq);
 
 int  virtio_dev_rx_queue_setup(struct rte_eth_dev *dev, uint16_t rx_queue_id,
diff --git a/drivers/net/virtio/virtio_pci.h b/drivers/net/virtio/virtio_pci.h
index 0c5ed31..f63f76c 100644
--- a/drivers/net/virtio/virtio_pci.h
+++ b/drivers/net/virtio/virtio_pci.h
@@ -264,6 +264,8 @@ struct virtio_hw {
 	struct virtio_net_config *dev_cfg;
 	const struct virtio_pci_ops *vtpci_ops;
 	void	    *virtio_user_dev;
+
+	struct virtqueue **vqs;
 };
 
 /*
diff --git a/drivers/net/virtio/virtio_rxtx.c b/drivers/net/virtio/virtio_rxtx.c
index b4c4aa4..fb703d2 100644
--- a/drivers/net/virtio/virtio_rxtx.c
+++ b/drivers/net/virtio/virtio_rxtx.c
@@ -530,24 +530,24 @@ int
 virtio_dev_rx_queue_setup(struct rte_eth_dev *dev,
 			uint16_t queue_idx,
 			uint16_t nb_desc,
-			unsigned int socket_id,
+			unsigned int socket_id __rte_unused,
 			__rte_unused const struct rte_eth_rxconf *rx_conf,
 			struct rte_mempool *mp)
 {
 	uint16_t vtpci_queue_idx = 2 * queue_idx + VTNET_SQ_RQ_QUEUE_IDX;
+	struct virtio_hw *hw = dev->data->dev_private;
+	struct virtqueue *vq = hw->vqs[vtpci_queue_idx];
 	struct virtnet_rx *rxvq;
-	int ret;
 
 	PMD_INIT_FUNC_TRACE();
-	ret = virtio_dev_queue_setup(dev, VTNET_RQ, queue_idx, vtpci_queue_idx,
-			nb_desc, socket_id, (void **)&rxvq);
-	if (ret < 0) {
-		PMD_INIT_LOG(ERR, "rvq initialization failed");
-		return ret;
-	}
 
-	/* Create mempool for rx mbuf allocation */
+	if (nb_desc == 0 || nb_desc > vq->vq_nentries)
+		nb_desc = vq->vq_nentries;
+	vq->vq_free_cnt = RTE_MIN(vq->vq_free_cnt, nb_desc);
+
+	rxvq = &vq->rxq;
 	rxvq->mpool = mp;
+	rxvq->queue_id = queue_idx;
 
 	dev->data->rx_queues[queue_idx] = rxvq;
 
@@ -613,27 +613,25 @@ int
 virtio_dev_tx_queue_setup(struct rte_eth_dev *dev,
 			uint16_t queue_idx,
 			uint16_t nb_desc,
-			unsigned int socket_id,
+			unsigned int socket_id __rte_unused,
 			const struct rte_eth_txconf *tx_conf)
 {
 	uint8_t vtpci_queue_idx = 2 * queue_idx + VTNET_SQ_TQ_QUEUE_IDX;
+	struct virtio_hw *hw = dev->data->dev_private;
+	struct virtqueue *vq = hw->vqs[vtpci_queue_idx];
 	struct virtnet_tx *txvq;
-	struct virtqueue *vq;
 	uint16_t tx_free_thresh;
-	int ret;
 
 	PMD_INIT_FUNC_TRACE();
 
-
 	virtio_update_rxtx_handler(dev, tx_conf);
 
-	ret = virtio_dev_queue_setup(dev, VTNET_TQ, queue_idx, vtpci_queue_idx,
-			nb_desc, socket_id, (void **)&txvq);
-	if (ret < 0) {
-		PMD_INIT_LOG(ERR, "tvq initialization failed");
-		return ret;
-	}
-	vq = txvq->vq;
+	if (nb_desc == 0 || nb_desc > vq->vq_nentries)
+		nb_desc = vq->vq_nentries;
+	vq->vq_free_cnt = RTE_MIN(vq->vq_free_cnt, nb_desc);
+
+	txvq = &vq->txq;
+	txvq->queue_id = queue_idx;
 
 	tx_free_thresh = tx_conf->tx_free_thresh;
 	if (tx_free_thresh == 0)
-- 
1.9.0


* [dpdk-dev] [PATCH 5/8] net/virtio: initiate vring at init stage
  2016-11-03 16:09 [dpdk-dev] [PATCH 0/8] net/virtio: fix queue reconfigure issue Yuanhan Liu
                   ` (3 preceding siblings ...)
  2016-11-03 16:09 ` [dpdk-dev] [PATCH 4/8] net/virtio: allocate queue at init stage Yuanhan Liu
@ 2016-11-03 16:09 ` Yuanhan Liu
  2016-11-04  8:34   ` Maxime Coquelin
  2016-11-03 16:09 ` [dpdk-dev] [PATCH 6/8] net/virtio: move queue configure code to proper place Yuanhan Liu
                   ` (3 subsequent siblings)
  8 siblings, 1 reply; 43+ messages in thread
From: Yuanhan Liu @ 2016-11-03 16:09 UTC (permalink / raw)
  To: dev; +Cc: Ilya Maximets

virtio_dev_vring_start() actually does the vring initialization job. And
the vring initialization should be done at the dev init stage, as stated
in detail in the former commit.

So move it there, and rename it to virtio_init_vring().

Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
---
 drivers/net/virtio/virtio_ethdev.c | 32 +++++++++++++++++++++++++++++++-
 drivers/net/virtio/virtio_rxtx.c   | 32 --------------------------------
 2 files changed, 31 insertions(+), 33 deletions(-)

diff --git a/drivers/net/virtio/virtio_ethdev.c b/drivers/net/virtio/virtio_ethdev.c
index 253bcb5..b80b6f5 100644
--- a/drivers/net/virtio/virtio_ethdev.c
+++ b/drivers/net/virtio/virtio_ethdev.c
@@ -306,6 +306,35 @@ virtio_get_queue_type(struct virtio_hw *hw, uint16_t vtpci_queue_idx)
 		return VTNET_TQ;
 }
 
+static void
+virtio_init_vring(struct virtqueue *vq)
+{
+	int size = vq->vq_nentries;
+	struct vring *vr = &vq->vq_ring;
+	uint8_t *ring_mem = vq->vq_ring_virt_mem;
+
+	PMD_INIT_FUNC_TRACE();
+
+	/*
+	 * Reinitialise since virtio port might have been stopped and restarted
+	 */
+	memset(ring_mem, 0, vq->vq_ring_size);
+	vring_init(vr, size, ring_mem, VIRTIO_PCI_VRING_ALIGN);
+	vq->vq_used_cons_idx = 0;
+	vq->vq_desc_head_idx = 0;
+	vq->vq_avail_idx = 0;
+	vq->vq_desc_tail_idx = (uint16_t)(vq->vq_nentries - 1);
+	vq->vq_free_cnt = vq->vq_nentries;
+	memset(vq->vq_descx, 0, sizeof(struct vq_desc_extra) * vq->vq_nentries);
+
+	vring_desc_init(vr->desc, size);
+
+	/*
+	 * Disable device(host) interrupting guest
+	 */
+	virtqueue_disable_intr(vq);
+}
+
 static int
 virtio_init_queue(struct rte_eth_dev *dev, uint16_t vtpci_queue_idx)
 {
@@ -369,7 +398,6 @@ virtio_init_queue(struct rte_eth_dev *dev, uint16_t vtpci_queue_idx)
 	vq->hw = hw;
 	vq->vq_queue_index = vtpci_queue_idx;
 	vq->vq_nentries = vq_size;
-	vq->vq_free_cnt = vq_size;
 
 	/*
 	 * Reserve a memzone for vring elements
@@ -400,6 +428,8 @@ virtio_init_queue(struct rte_eth_dev *dev, uint16_t vtpci_queue_idx)
 	PMD_INIT_LOG(DEBUG, "vq->vq_ring_virt_mem: 0x%" PRIx64,
 		     (uint64_t)(uintptr_t)mz->addr);
 
+	virtio_init_vring(vq);
+
 	if (sz_hdr_mz) {
 		snprintf(vq_hdr_name, sizeof(vq_hdr_name), "port%d_vq%d_hdr",
 			 dev->data->port_id, vtpci_queue_idx);
diff --git a/drivers/net/virtio/virtio_rxtx.c b/drivers/net/virtio/virtio_rxtx.c
index fb703d2..bce0663 100644
--- a/drivers/net/virtio/virtio_rxtx.c
+++ b/drivers/net/virtio/virtio_rxtx.c
@@ -378,42 +378,12 @@ virtqueue_enqueue_xmit(struct virtnet_tx *txvq, struct rte_mbuf *cookie,
 	vq_update_avail_ring(vq, head_idx);
 }
 
-static void
-virtio_dev_vring_start(struct virtqueue *vq)
-{
-	int size = vq->vq_nentries;
-	struct vring *vr = &vq->vq_ring;
-	uint8_t *ring_mem = vq->vq_ring_virt_mem;
-
-	PMD_INIT_FUNC_TRACE();
-
-	/*
-	 * Reinitialise since virtio port might have been stopped and restarted
-	 */
-	memset(vq->vq_ring_virt_mem, 0, vq->vq_ring_size);
-	vring_init(vr, size, ring_mem, VIRTIO_PCI_VRING_ALIGN);
-	vq->vq_used_cons_idx = 0;
-	vq->vq_desc_head_idx = 0;
-	vq->vq_avail_idx = 0;
-	vq->vq_desc_tail_idx = (uint16_t)(vq->vq_nentries - 1);
-	vq->vq_free_cnt = vq->vq_nentries;
-	memset(vq->vq_descx, 0, sizeof(struct vq_desc_extra) * vq->vq_nentries);
-
-	vring_desc_init(vr->desc, size);
-
-	/*
-	 * Disable device(host) interrupting guest
-	 */
-	virtqueue_disable_intr(vq);
-}
-
 void
 virtio_dev_cq_start(struct rte_eth_dev *dev)
 {
 	struct virtio_hw *hw = dev->data->dev_private;
 
 	if (hw->cvq && hw->cvq->vq) {
-		virtio_dev_vring_start(hw->cvq->vq);
 		VIRTQUEUE_DUMP((struct virtqueue *)hw->cvq->vq);
 	}
 }
@@ -441,7 +411,6 @@ virtio_dev_rxtx_start(struct rte_eth_dev *dev)
 		int error, nbufs;
 		struct rte_mbuf *m;
 
-		virtio_dev_vring_start(vq);
 		if (rxvq->mpool == NULL) {
 			rte_exit(EXIT_FAILURE,
 				"Cannot allocate mbufs for rx virtqueue");
@@ -499,7 +468,6 @@ virtio_dev_rxtx_start(struct rte_eth_dev *dev)
 		struct virtnet_tx *txvq = dev->data->tx_queues[i];
 		struct virtqueue *vq = txvq->vq;
 
-		virtio_dev_vring_start(vq);
 		if (hw->use_simple_rxtx) {
 			uint16_t mid_idx  = vq->vq_nentries >> 1;
 
-- 
1.9.0


* [dpdk-dev] [PATCH 6/8] net/virtio: move queue configure code to proper place
  2016-11-03 16:09 [dpdk-dev] [PATCH 0/8] net/virtio: fix queue reconfigure issue Yuanhan Liu
                   ` (4 preceding siblings ...)
  2016-11-03 16:09 ` [dpdk-dev] [PATCH 5/8] net/virtio: initiate vring " Yuanhan Liu
@ 2016-11-03 16:09 ` Yuanhan Liu
  2016-11-04  8:39   ` Maxime Coquelin
  2016-11-03 16:09 ` [dpdk-dev] [PATCH 7/8] net/virtio: complete init stage at the right place Yuanhan Liu
                   ` (2 subsequent siblings)
  8 siblings, 1 reply; 43+ messages in thread
From: Yuanhan Liu @ 2016-11-03 16:09 UTC (permalink / raw)
  To: dev; +Cc: Ilya Maximets

The only remaining piece of code in virtio_dev_rxtx_start() actually
does queue configure/setup work. So, move it to the corresponding
queue_setup callbacks.

Once that is done, virtio_dev_rxtx_start() becomes an empty function, so
it is removed.

Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
---
 drivers/net/virtio/virtio_ethdev.c |   2 -
 drivers/net/virtio/virtio_ethdev.h |   2 -
 drivers/net/virtio/virtio_rxtx.c   | 187 ++++++++++++++++---------------------
 3 files changed, 79 insertions(+), 112 deletions(-)

diff --git a/drivers/net/virtio/virtio_ethdev.c b/drivers/net/virtio/virtio_ethdev.c
index b80b6f5..37459e7 100644
--- a/drivers/net/virtio/virtio_ethdev.c
+++ b/drivers/net/virtio/virtio_ethdev.c
@@ -1502,8 +1502,6 @@ virtio_dev_start(struct rte_eth_dev *dev)
 	if (hw->started)
 		return 0;
 
-	/* Do final configuration before rx/tx engine starts */
-	virtio_dev_rxtx_start(dev);
 	vtpci_reinit_complete(hw);
 
 	hw->started = 1;
diff --git a/drivers/net/virtio/virtio_ethdev.h b/drivers/net/virtio/virtio_ethdev.h
index 5db8632..1396c6e 100644
--- a/drivers/net/virtio/virtio_ethdev.h
+++ b/drivers/net/virtio/virtio_ethdev.h
@@ -78,8 +78,6 @@ void virtio_dev_cq_start(struct rte_eth_dev *dev);
 /*
  * RX/TX function prototypes
  */
-void virtio_dev_rxtx_start(struct rte_eth_dev *dev);
-
 void virtio_dev_queue_release(struct virtqueue *vq);
 
 int  virtio_dev_rx_queue_setup(struct rte_eth_dev *dev, uint16_t rx_queue_id,
diff --git a/drivers/net/virtio/virtio_rxtx.c b/drivers/net/virtio/virtio_rxtx.c
index bce0663..4cb2ce7 100644
--- a/drivers/net/virtio/virtio_rxtx.c
+++ b/drivers/net/virtio/virtio_rxtx.c
@@ -388,112 +388,6 @@ virtio_dev_cq_start(struct rte_eth_dev *dev)
 	}
 }
 
-void
-virtio_dev_rxtx_start(struct rte_eth_dev *dev)
-{
-	/*
-	 * Start receive and transmit vrings
-	 * -	Setup vring structure for all queues
-	 * -	Initialize descriptor for the rx vring
-	 * -	Allocate blank mbufs for the each rx descriptor
-	 *
-	 */
-	uint16_t i;
-	uint16_t desc_idx;
-	struct virtio_hw *hw = dev->data->dev_private;
-
-	PMD_INIT_FUNC_TRACE();
-
-	/* Start rx vring. */
-	for (i = 0; i < dev->data->nb_rx_queues; i++) {
-		struct virtnet_rx *rxvq = dev->data->rx_queues[i];
-		struct virtqueue *vq = rxvq->vq;
-		int error, nbufs;
-		struct rte_mbuf *m;
-
-		if (rxvq->mpool == NULL) {
-			rte_exit(EXIT_FAILURE,
-				"Cannot allocate mbufs for rx virtqueue");
-		}
-
-		/* Allocate blank mbufs for the each rx descriptor */
-		nbufs = 0;
-		error = ENOSPC;
-
-		if (hw->use_simple_rxtx) {
-			for (desc_idx = 0; desc_idx < vq->vq_nentries;
-			     desc_idx++) {
-				vq->vq_ring.avail->ring[desc_idx] = desc_idx;
-				vq->vq_ring.desc[desc_idx].flags =
-					VRING_DESC_F_WRITE;
-			}
-		}
-
-		memset(&rxvq->fake_mbuf, 0, sizeof(rxvq->fake_mbuf));
-		for (desc_idx = 0; desc_idx < RTE_PMD_VIRTIO_RX_MAX_BURST;
-		     desc_idx++) {
-			vq->sw_ring[vq->vq_nentries + desc_idx] =
-				&rxvq->fake_mbuf;
-		}
-
-		while (!virtqueue_full(vq)) {
-			m = rte_mbuf_raw_alloc(rxvq->mpool);
-			if (m == NULL)
-				break;
-
-			/******************************************
-			*         Enqueue allocated buffers        *
-			*******************************************/
-			if (hw->use_simple_rxtx)
-				error = virtqueue_enqueue_recv_refill_simple(vq, m);
-			else
-				error = virtqueue_enqueue_recv_refill(vq, m);
-
-			if (error) {
-				rte_pktmbuf_free(m);
-				break;
-			}
-			nbufs++;
-		}
-
-		vq_update_avail_idx(vq);
-
-		PMD_INIT_LOG(DEBUG, "Allocated %d bufs", nbufs);
-
-		VIRTQUEUE_DUMP(vq);
-	}
-
-	/* Start tx vring. */
-	for (i = 0; i < dev->data->nb_tx_queues; i++) {
-		struct virtnet_tx *txvq = dev->data->tx_queues[i];
-		struct virtqueue *vq = txvq->vq;
-
-		if (hw->use_simple_rxtx) {
-			uint16_t mid_idx  = vq->vq_nentries >> 1;
-
-			for (desc_idx = 0; desc_idx < mid_idx; desc_idx++) {
-				vq->vq_ring.avail->ring[desc_idx] =
-					desc_idx + mid_idx;
-				vq->vq_ring.desc[desc_idx + mid_idx].next =
-					desc_idx;
-				vq->vq_ring.desc[desc_idx + mid_idx].addr =
-					txvq->virtio_net_hdr_mem +
-					offsetof(struct virtio_tx_region, tx_hdr);
-				vq->vq_ring.desc[desc_idx + mid_idx].len =
-					vq->hw->vtnet_hdr_size;
-				vq->vq_ring.desc[desc_idx + mid_idx].flags =
-					VRING_DESC_F_NEXT;
-				vq->vq_ring.desc[desc_idx].flags = 0;
-			}
-			for (desc_idx = mid_idx; desc_idx < vq->vq_nentries;
-			     desc_idx++)
-				vq->vq_ring.avail->ring[desc_idx] = desc_idx;
-		}
-
-		VIRTQUEUE_DUMP(vq);
-	}
-}
-
 int
 virtio_dev_rx_queue_setup(struct rte_eth_dev *dev,
 			uint16_t queue_idx,
@@ -506,6 +400,9 @@ virtio_dev_rx_queue_setup(struct rte_eth_dev *dev,
 	struct virtio_hw *hw = dev->data->dev_private;
 	struct virtqueue *vq = hw->vqs[vtpci_queue_idx];
 	struct virtnet_rx *rxvq;
+	int error, nbufs;
+	struct rte_mbuf *m;
+	uint16_t desc_idx;
 
 	PMD_INIT_FUNC_TRACE();
 
@@ -514,13 +411,61 @@ virtio_dev_rx_queue_setup(struct rte_eth_dev *dev,
 	vq->vq_free_cnt = RTE_MIN(vq->vq_free_cnt, nb_desc);
 
 	rxvq = &vq->rxq;
-	rxvq->mpool = mp;
 	rxvq->queue_id = queue_idx;
-
+	rxvq->mpool = mp;
+	if (rxvq->mpool == NULL) {
+		rte_exit(EXIT_FAILURE,
+			"Cannot allocate mbufs for rx virtqueue");
+	}
 	dev->data->rx_queues[queue_idx] = rxvq;
 
+
+	/* Allocate blank mbufs for the each rx descriptor */
+	nbufs = 0;
+	error = ENOSPC;
+
+	if (hw->use_simple_rxtx) {
+		for (desc_idx = 0; desc_idx < vq->vq_nentries;
+		     desc_idx++) {
+			vq->vq_ring.avail->ring[desc_idx] = desc_idx;
+			vq->vq_ring.desc[desc_idx].flags =
+				VRING_DESC_F_WRITE;
+		}
+	}
+
+	memset(&rxvq->fake_mbuf, 0, sizeof(rxvq->fake_mbuf));
+	for (desc_idx = 0; desc_idx < RTE_PMD_VIRTIO_RX_MAX_BURST;
+	     desc_idx++) {
+		vq->sw_ring[vq->vq_nentries + desc_idx] =
+			&rxvq->fake_mbuf;
+	}
+
+	while (!virtqueue_full(vq)) {
+		m = rte_mbuf_raw_alloc(rxvq->mpool);
+		if (m == NULL)
+			break;
+
+		/* Enqueue allocated buffers */
+		if (hw->use_simple_rxtx)
+			error = virtqueue_enqueue_recv_refill_simple(vq, m);
+		else
+			error = virtqueue_enqueue_recv_refill(vq, m);
+
+		if (error) {
+			rte_pktmbuf_free(m);
+			break;
+		}
+		nbufs++;
+	}
+
+	vq_update_avail_idx(vq);
+
+	PMD_INIT_LOG(DEBUG, "Allocated %d bufs", nbufs);
+
 	virtio_rxq_vec_setup(rxvq);
 
+	VIRTQUEUE_DUMP(vq);
+
 	return 0;
 }
 
@@ -589,6 +534,7 @@ virtio_dev_tx_queue_setup(struct rte_eth_dev *dev,
 	struct virtqueue *vq = hw->vqs[vtpci_queue_idx];
 	struct virtnet_tx *txvq;
 	uint16_t tx_free_thresh;
+	uint16_t desc_idx;
 
 	PMD_INIT_FUNC_TRACE();
 
@@ -617,7 +563,32 @@ virtio_dev_tx_queue_setup(struct rte_eth_dev *dev,
 
 	vq->vq_free_thresh = tx_free_thresh;
 
+	if (hw->use_simple_rxtx) {
+		uint16_t mid_idx  = vq->vq_nentries >> 1;
+
+		for (desc_idx = 0; desc_idx < mid_idx; desc_idx++) {
+			vq->vq_ring.avail->ring[desc_idx] =
+				desc_idx + mid_idx;
+			vq->vq_ring.desc[desc_idx + mid_idx].next =
+				desc_idx;
+			vq->vq_ring.desc[desc_idx + mid_idx].addr =
+				txvq->virtio_net_hdr_mem +
+				offsetof(struct virtio_tx_region, tx_hdr);
+			vq->vq_ring.desc[desc_idx + mid_idx].len =
+				vq->hw->vtnet_hdr_size;
+			vq->vq_ring.desc[desc_idx + mid_idx].flags =
+				VRING_DESC_F_NEXT;
+			vq->vq_ring.desc[desc_idx].flags = 0;
+		}
+		for (desc_idx = mid_idx; desc_idx < vq->vq_nentries;
+		     desc_idx++)
+			vq->vq_ring.avail->ring[desc_idx] = desc_idx;
+	}
+
+	VIRTQUEUE_DUMP(vq);
+
 	dev->data->tx_queues[queue_idx] = txvq;
+
 	return 0;
 }
 
-- 
1.9.0

^ permalink raw reply	[flat|nested] 43+ messages in thread

* [dpdk-dev] [PATCH 7/8] net/virtio: complete init stage at the right place
  2016-11-03 16:09 [dpdk-dev] [PATCH 0/8] net/virtio: fix queue reconfigure issue Yuanhan Liu
                   ` (5 preceding siblings ...)
  2016-11-03 16:09 ` [dpdk-dev] [PATCH 6/8] net/virtio: move queue configure code to proper place Yuanhan Liu
@ 2016-11-03 16:09 ` Yuanhan Liu
  2016-11-04  8:44   ` Maxime Coquelin
  2016-11-03 16:10 ` [dpdk-dev] [PATCH 8/8] net/virtio: remove started field Yuanhan Liu
  2016-11-05  9:40 ` [dpdk-dev] [PATCH v2 00/10] net/virtio: fix queue reconfigure issue Yuanhan Liu
  8 siblings, 1 reply; 43+ messages in thread
From: Yuanhan Liu @ 2016-11-03 16:09 UTC (permalink / raw)
  To: dev; +Cc: Ilya Maximets

Invoking vtpci_reinit_complete() at the port start stage doesn't make
sense; instead, it should be done at the end of the dev init stage.

So move it there.

Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
---
 drivers/net/virtio/virtio_ethdev.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/drivers/net/virtio/virtio_ethdev.c b/drivers/net/virtio/virtio_ethdev.c
index 37459e7..c147909 100644
--- a/drivers/net/virtio/virtio_ethdev.c
+++ b/drivers/net/virtio/virtio_ethdev.c
@@ -1282,6 +1282,7 @@ virtio_init_device(struct rte_eth_dev *eth_dev, uint64_t req_features)
 	ret = virtio_alloc_queues(eth_dev);
 	if (ret < 0)
 		return ret;
+	vtpci_reinit_complete(hw);
 
 	if (pci_dev)
 		PMD_INIT_LOG(DEBUG, "port %d vendorID=0x%x deviceID=0x%x",
@@ -1502,8 +1503,6 @@ virtio_dev_start(struct rte_eth_dev *dev)
 	if (hw->started)
 		return 0;
 
-	vtpci_reinit_complete(hw);
-
 	hw->started = 1;
 
 	/*Notify the backend
-- 
1.9.0

^ permalink raw reply	[flat|nested] 43+ messages in thread

* [dpdk-dev] [PATCH 8/8] net/virtio: remove started field
  2016-11-03 16:09 [dpdk-dev] [PATCH 0/8] net/virtio: fix queue reconfigure issue Yuanhan Liu
                   ` (6 preceding siblings ...)
  2016-11-03 16:09 ` [dpdk-dev] [PATCH 7/8] net/virtio: complete init stage at the right place Yuanhan Liu
@ 2016-11-03 16:10 ` Yuanhan Liu
  2016-11-04  8:46   ` Maxime Coquelin
  2016-11-05  9:40 ` [dpdk-dev] [PATCH v2 00/10] net/virtio: fix queue reconfigure issue Yuanhan Liu
  8 siblings, 1 reply; 43+ messages in thread
From: Yuanhan Liu @ 2016-11-03 16:10 UTC (permalink / raw)
  To: dev; +Cc: Ilya Maximets

The "hw->started" field was introduced to stop touching queues
on restart. We no longer touch queues on restart at all, thus
it's safe to remove this flag.

Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
---
 drivers/net/virtio/virtio_ethdev.c | 15 ++-------------
 drivers/net/virtio/virtio_pci.h    |  1 -
 2 files changed, 2 insertions(+), 14 deletions(-)

diff --git a/drivers/net/virtio/virtio_ethdev.c b/drivers/net/virtio/virtio_ethdev.c
index c147909..1505f67 100644
--- a/drivers/net/virtio/virtio_ethdev.c
+++ b/drivers/net/virtio/virtio_ethdev.c
@@ -607,7 +607,6 @@ virtio_dev_close(struct rte_eth_dev *dev)
 	if (dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC)
 		vtpci_irq_config(hw, VIRTIO_MSI_NO_VECTOR);
 	vtpci_reset(hw);
-	hw->started = 0;
 	virtio_dev_free_mbufs(dev);
 	virtio_free_queues(dev);
 }
@@ -1350,17 +1349,14 @@ static int
 eth_virtio_dev_uninit(struct rte_eth_dev *eth_dev)
 {
 	struct rte_pci_device *pci_dev;
-	struct virtio_hw *hw = eth_dev->data->dev_private;
 
 	PMD_INIT_FUNC_TRACE();
 
 	if (rte_eal_process_type() == RTE_PROC_SECONDARY)
 		return -EPERM;
 
-	if (hw->started == 1) {
-		virtio_dev_stop(eth_dev);
-		virtio_dev_close(eth_dev);
-	}
+	virtio_dev_stop(eth_dev);
+	virtio_dev_close(eth_dev);
 	pci_dev = eth_dev->pci_dev;
 
 	eth_dev->dev_ops = NULL;
@@ -1479,7 +1475,6 @@ static int
 virtio_dev_start(struct rte_eth_dev *dev)
 {
 	uint16_t nb_queues, i;
-	struct virtio_hw *hw = dev->data->dev_private;
 	struct virtnet_rx *rxvq;
 	struct virtnet_tx *txvq __rte_unused;
 
@@ -1499,12 +1494,6 @@ virtio_dev_start(struct rte_eth_dev *dev)
 	/* Initialize Link state */
 	virtio_dev_link_update(dev, 0);
 
-	/* On restart after stop do not touch queues */
-	if (hw->started)
-		return 0;
-
-	hw->started = 1;
-
 	/*Notify the backend
 	 *Otherwise the tap backend might already stop its queue due to fullness.
 	 *vhost backend will have no chance to be waked up
diff --git a/drivers/net/virtio/virtio_pci.h b/drivers/net/virtio/virtio_pci.h
index f63f76c..de271bf 100644
--- a/drivers/net/virtio/virtio_pci.h
+++ b/drivers/net/virtio/virtio_pci.h
@@ -252,7 +252,6 @@ struct virtio_hw {
 	uint16_t    vtnet_hdr_size;
 	uint8_t	    vlan_strip;
 	uint8_t	    use_msix;
-	uint8_t     started;
 	uint8_t     modern;
 	uint8_t     use_simple_rxtx;
 	uint8_t     mac_addr[ETHER_ADDR_LEN];
-- 
1.9.0

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [dpdk-dev] [PATCH 1/8] net/virtio: revert "virtio: fix restart"
  2016-11-03 16:09 ` [dpdk-dev] [PATCH 1/8] net/virtio: revert "virtio: fix restart" Yuanhan Liu
@ 2016-11-03 20:36   ` Maxime Coquelin
  2016-11-04  2:00     ` Yuanhan Liu
  2016-11-04  8:10   ` Maxime Coquelin
  1 sibling, 1 reply; 43+ messages in thread
From: Maxime Coquelin @ 2016-11-03 20:36 UTC (permalink / raw)
  To: Yuanhan Liu, dev; +Cc: Ilya Maximets, stable

Hi Yuanhan,

On 11/03/2016 05:09 PM, Yuanhan Liu wrote:
> This reverts commit 9a0615af7746 ("virtio: fix restart"); conflict is
> manually addressed.
>
> Kyle reported an issue with above commit
>
>     qemu-kvm: Guest moved used index from 5 to 1
>
> with following steps,
>
>     1) Start my virtio interfaces
>     2) Send some traffic into/out of the interfaces
>     3) Stop the interfaces
>     4) Start the interfaces
>     5) Send some more traffic
>
> And here are some quotes from Kyle's analysis,
>
>     Prior to the patch, if an interface were stopped then started, without
>     restarting the application, the queues would be left as-is, because
>     hw->started would be set to 1. Now, calling stop sets hw->started to 0,
>     which means the next call to start will "touch the queues". This is the
>     unintended side-effect that causes the problem.

Maybe a good idea to explain what problem the revert aims to fix.
It does not seem to be clearly stated in the commit message.

>
> Fixes: 9a0615af7746 ("virtio: fix restart")
>
> Cc: Jianfeng Tan <jianfeng.tan@intel.com>
> Cc: <stable@dpdk.org>
> Reported-by: Kyle Larose <klarose@sandvine.com>
> Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>

Thanks,
Maxime

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [dpdk-dev] [PATCH 2/8] net/virtio: simplify queue memzone name
  2016-11-03 16:09 ` [dpdk-dev] [PATCH 2/8] net/virtio: simplify queue memzone name Yuanhan Liu
@ 2016-11-03 20:41   ` Maxime Coquelin
  0 siblings, 0 replies; 43+ messages in thread
From: Maxime Coquelin @ 2016-11-03 20:41 UTC (permalink / raw)
  To: Yuanhan Liu, dev; +Cc: Ilya Maximets



On 11/03/2016 05:09 PM, Yuanhan Liu wrote:
> Instead of setting up a queue memzone name like "port0_rxq0", "port0_txq0",
> it could be simplified a bit to something like "port0_vq0", "port0_vq1" ...
>
> Meanwhile, the code is also simplified a bit.
>
> Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
> ---
>  drivers/net/virtio/virtio_ethdev.c | 10 ++++------
>  1 file changed, 4 insertions(+), 6 deletions(-)
>
> diff --git a/drivers/net/virtio/virtio_ethdev.c b/drivers/net/virtio/virtio_ethdev.c
> index 5815875..d082df5 100644
> --- a/drivers/net/virtio/virtio_ethdev.c
> +++ b/drivers/net/virtio/virtio_ethdev.c
> @@ -312,7 +312,6 @@ int virtio_dev_queue_setup(struct rte_eth_dev *dev,
>  	struct virtnet_tx *txvq = NULL;
>  	struct virtnet_ctl *cvq = NULL;
>  	struct virtqueue *vq;
> -	const char *queue_names[] = {"rvq", "txq", "cvq"};
>  	size_t sz_vq, sz_q = 0, sz_hdr_mz = 0;
>  	void *sw_ring = NULL;
>  	int ret;
> @@ -335,8 +334,8 @@ int virtio_dev_queue_setup(struct rte_eth_dev *dev,
>  		return -EINVAL;
>  	}
>
> -	snprintf(vq_name, sizeof(vq_name), "port%d_%s%d",
> -		 dev->data->port_id, queue_names[queue_type], queue_idx);
> +	snprintf(vq_name, sizeof(vq_name), "port%d_vq%d",
> +		 dev->data->port_id, vtpci_queue_idx);
>
>  	sz_vq = RTE_ALIGN_CEIL(sizeof(*vq) +
>  				vq_size * sizeof(struct vq_desc_extra),
> @@ -398,9 +397,8 @@ int virtio_dev_queue_setup(struct rte_eth_dev *dev,
>  		     (uint64_t)(uintptr_t)mz->addr);
>
>  	if (sz_hdr_mz) {
> -		snprintf(vq_hdr_name, sizeof(vq_hdr_name), "port%d_%s%d_hdr",
> -			 dev->data->port_id, queue_names[queue_type],
> -			 queue_idx);
> +		snprintf(vq_hdr_name, sizeof(vq_hdr_name), "port%d_vq%d_hdr",
> +			 dev->data->port_id, vtpci_queue_idx);
>  		hdr_mz = rte_memzone_reserve_aligned(vq_hdr_name, sz_hdr_mz,
>  						     socket_id, 0,
>  						     RTE_CACHE_LINE_SIZE);
>

Sounds reasonable:
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>

Thanks,
Maxime

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [dpdk-dev] [PATCH 3/8] net/virtio: simplify queue allocation
  2016-11-03 16:09 ` [dpdk-dev] [PATCH 3/8] net/virtio: simplify queue allocation Yuanhan Liu
@ 2016-11-03 20:48   ` Maxime Coquelin
  2016-11-04  1:51     ` Yuanhan Liu
  0 siblings, 1 reply; 43+ messages in thread
From: Maxime Coquelin @ 2016-11-03 20:48 UTC (permalink / raw)
  To: Yuanhan Liu, dev; +Cc: Ilya Maximets



On 11/03/2016 05:09 PM, Yuanhan Liu wrote:
> Let rxq/txq/cq be the union field of the virtqueue struct. This would
> simplify the vq allocation a bit: we don't need to calculate the vq_size
> any more based on the queue time.
s/time/type/ ?
>
> Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
> ---
>  drivers/net/virtio/virtio_ethdev.c | 18 +++++++-----------
>  drivers/net/virtio/virtqueue.h     |  7 +++++++
>  2 files changed, 14 insertions(+), 11 deletions(-)

Other than that:
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>

Thanks,
Maxime

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [dpdk-dev] [PATCH 4/8] net/virtio: allocate queue at init stage
  2016-11-03 16:09 ` [dpdk-dev] [PATCH 4/8] net/virtio: allocate queue at init stage Yuanhan Liu
@ 2016-11-03 21:11   ` Maxime Coquelin
  2016-11-04  1:50     ` Yuanhan Liu
  2016-11-04  8:25   ` Maxime Coquelin
  2016-11-04 15:21   ` Kevin Traynor
  2 siblings, 1 reply; 43+ messages in thread
From: Maxime Coquelin @ 2016-11-03 21:11 UTC (permalink / raw)
  To: Yuanhan Liu, dev; +Cc: Ilya Maximets



On 11/03/2016 05:09 PM, Yuanhan Liu wrote:
> Queue allocation should be done once, since the queue related info (such
> as vring address) will only be informed to the vhost-user backend once,
> without a virtio device reset.
>
> That means, if you allocate queues again after the vhost-user negotiation,
> the vhost-user backend will not be informed any more, leading to a state
> where the vring info mismatches between the virtio PMD driver and the
> vhost backend: the driver switches to the new address that has just been
> allocated, while the vhost backend still sticks to the old address that
> was assigned in the init stage.
>
> Unfortunately, that is exactly how the virtio driver is coded so far: queue
> allocation is done at queue_setup stage (when rte_eth_tx/rx_queue_setup is
> invoked). This is wrong, because queue_setup can be invoked several times.
> For example,
>
>     $ start_testpmd.sh ... --txq=1 --rxq=1 ...
>     > port stop 0
>     > port config all txq 1 # just trigger the queue_setup callback again
>     > port config all rxq 1
>     > port start 0
>
> The right way to do is allocate the queues in the init stage, so that the
> vring info could be persistent with the vhost-user backend.
>
> Besides that, we should allocate max_queue pairs the device supports, but
> not nr queue pairs firstly configured, to make following case work.
I understand, but how much memory overhead does that represent?

Have you considered performing a device reset when queue number is
changed?

>
>     $ start_testpmd.sh ... --txq=1 --rxq=1 ...
>     > port stop 0
>     > port config all txq 2
>     > port config all rxq 2
>     > port start 0
>
> Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
> ---
>  drivers/net/virtio/virtio_ethdev.c | 105 +++++++++++++++++++++++--------------
>  drivers/net/virtio/virtio_ethdev.h |   8 ---
>  drivers/net/virtio/virtio_pci.h    |   2 +
>  drivers/net/virtio/virtio_rxtx.c   |  38 +++++++-------
>  4 files changed, 85 insertions(+), 68 deletions(-)
>
> diff --git a/drivers/net/virtio/virtio_ethdev.c b/drivers/net/virtio/virtio_ethdev.c
> index 5a2c14b..253bcb5 100644
> --- a/drivers/net/virtio/virtio_ethdev.c
> +++ b/drivers/net/virtio/virtio_ethdev.c
> @@ -295,13 +295,19 @@ virtio_dev_queue_release(struct virtqueue *vq)
>  	}
>  }
>
> -int virtio_dev_queue_setup(struct rte_eth_dev *dev,
> -			int queue_type,
> -			uint16_t queue_idx,
> -			uint16_t vtpci_queue_idx,
> -			uint16_t nb_desc,
> -			unsigned int socket_id,
> -			void **pvq)
> +static int
> +virtio_get_queue_type(struct virtio_hw *hw, uint16_t vtpci_queue_idx)
> +{
> +	if (vtpci_queue_idx == hw->max_queue_pairs * 2)
> +		return VTNET_CQ;
> +	else if (vtpci_queue_idx % 2 == 0)
> +		return VTNET_RQ;
> +	else
> +		return VTNET_TQ;
> +}
> +
> +static int
> +virtio_init_queue(struct rte_eth_dev *dev, uint16_t vtpci_queue_idx)
>  {
>  	char vq_name[VIRTQUEUE_MAX_NAME_SZ];
>  	char vq_hdr_name[VIRTQUEUE_MAX_NAME_SZ];
> @@ -314,6 +320,7 @@ int virtio_dev_queue_setup(struct rte_eth_dev *dev,
>  	struct virtqueue *vq;
>  	size_t sz_hdr_mz = 0;
>  	void *sw_ring = NULL;
> +	int queue_type = virtio_get_queue_type(hw, vtpci_queue_idx);
>  	int ret;
>
>  	PMD_INIT_LOG(DEBUG, "setting up queue: %u", vtpci_queue_idx);
> @@ -351,18 +358,18 @@ int virtio_dev_queue_setup(struct rte_eth_dev *dev,
>  		sz_hdr_mz = PAGE_SIZE;
>  	}
>
> -	vq = rte_zmalloc_socket(vq_name, size, RTE_CACHE_LINE_SIZE, socket_id);
> +	vq = rte_zmalloc_socket(vq_name, size, RTE_CACHE_LINE_SIZE,
> +				SOCKET_ID_ANY);
>  	if (vq == NULL) {
>  		PMD_INIT_LOG(ERR, "can not allocate vq");
>  		return -ENOMEM;
>  	}
> +	hw->vqs[vtpci_queue_idx] = vq;
> +
>  	vq->hw = hw;
>  	vq->vq_queue_index = vtpci_queue_idx;
>  	vq->vq_nentries = vq_size;
> -
> -	if (nb_desc == 0 || nb_desc > vq_size)
> -		nb_desc = vq_size;
> -	vq->vq_free_cnt = nb_desc;
> +	vq->vq_free_cnt = vq_size;
>
>  	/*
>  	 * Reserve a memzone for vring elements
> @@ -372,7 +379,8 @@ int virtio_dev_queue_setup(struct rte_eth_dev *dev,
>  	PMD_INIT_LOG(DEBUG, "vring_size: %d, rounded_vring_size: %d",
>  		     size, vq->vq_ring_size);
>
> -	mz = rte_memzone_reserve_aligned(vq_name, vq->vq_ring_size, socket_id,
> +	mz = rte_memzone_reserve_aligned(vq_name, vq->vq_ring_size,
> +					 SOCKET_ID_ANY,
>  					 0, VIRTIO_PCI_VRING_ALIGN);
>  	if (mz == NULL) {
>  		if (rte_errno == EEXIST)
> @@ -396,7 +404,7 @@ int virtio_dev_queue_setup(struct rte_eth_dev *dev,
>  		snprintf(vq_hdr_name, sizeof(vq_hdr_name), "port%d_vq%d_hdr",
>  			 dev->data->port_id, vtpci_queue_idx);
>  		hdr_mz = rte_memzone_reserve_aligned(vq_hdr_name, sz_hdr_mz,
> -						     socket_id, 0,
> +						     SOCKET_ID_ANY, 0,
>  						     RTE_CACHE_LINE_SIZE);
>  		if (hdr_mz == NULL) {
>  			if (rte_errno == EEXIST)
> @@ -413,7 +421,7 @@ int virtio_dev_queue_setup(struct rte_eth_dev *dev,
>  			       sizeof(vq->sw_ring[0]);
>
>  		sw_ring = rte_zmalloc_socket("sw_ring", sz_sw,
> -					     RTE_CACHE_LINE_SIZE, socket_id);
> +				RTE_CACHE_LINE_SIZE, SOCKET_ID_ANY);
>  		if (!sw_ring) {
>  			PMD_INIT_LOG(ERR, "can not allocate RX soft ring");
>  			ret = -ENOMEM;
> @@ -424,19 +432,14 @@ int virtio_dev_queue_setup(struct rte_eth_dev *dev,
>  		rxvq = &vq->rxq;
>  		rxvq->vq = vq;
>  		rxvq->port_id = dev->data->port_id;
> -		rxvq->queue_id = queue_idx;
>  		rxvq->mz = mz;
> -		*pvq = rxvq;
>  	} else if (queue_type == VTNET_TQ) {
>  		txvq = &vq->txq;
>  		txvq->vq = vq;
>  		txvq->port_id = dev->data->port_id;
> -		txvq->queue_id = queue_idx;
>  		txvq->mz = mz;
>  		txvq->virtio_net_hdr_mz = hdr_mz;
>  		txvq->virtio_net_hdr_mem = hdr_mz->phys_addr;
> -
> -		*pvq = txvq;
>  	} else if (queue_type == VTNET_CQ) {
>  		cvq = &vq->cq;
>  		cvq->vq = vq;
> @@ -444,7 +447,8 @@ int virtio_dev_queue_setup(struct rte_eth_dev *dev,
>  		cvq->virtio_net_hdr_mz = hdr_mz;
>  		cvq->virtio_net_hdr_mem = hdr_mz->phys_addr;
>  		memset(cvq->virtio_net_hdr_mz->addr, 0, PAGE_SIZE);
> -		*pvq = cvq;
> +
> +		hw->cvq = cvq;
>  	}
>
>  	/* For virtio_user case (that is when dev->pci_dev is NULL), we use
> @@ -502,23 +506,45 @@ fail_q_alloc:
>  }
>
>  static int
> -virtio_dev_cq_queue_setup(struct rte_eth_dev *dev, uint16_t vtpci_queue_idx,
> -		uint32_t socket_id)
> +virtio_alloc_queues(struct rte_eth_dev *dev)
>  {
> -	struct virtnet_ctl *cvq;
> -	int ret;
>  	struct virtio_hw *hw = dev->data->dev_private;
> +	uint16_t nr_vq = hw->max_queue_pairs * 2;
> +	uint16_t i;
> +	int ret;
>
> -	PMD_INIT_FUNC_TRACE();
> -	ret = virtio_dev_queue_setup(dev, VTNET_CQ, VTNET_SQ_CQ_QUEUE_IDX,
> -			vtpci_queue_idx, 0, socket_id, (void **)&cvq);
> -	if (ret < 0) {
> -		PMD_INIT_LOG(ERR, "control vq initialization failed");
> -		return ret;
> +	if (vtpci_with_feature(hw, VIRTIO_NET_F_CTRL_VQ))
> +		nr_vq += 1;
> +
> +	hw->vqs = rte_zmalloc(NULL, sizeof(struct virtqueue *) * nr_vq, 0);
> +	if (!hw->vqs) {
> +		PMD_INIT_LOG(ERR, "failed to allocate vqs");
> +		return -ENOMEM;
> +	}
> +
> +	for (i = 0; i < nr_vq; i++) {
> +		ret = virtio_init_queue(dev, i);
> +		if (ret < 0)
> +			goto cleanup;
>  	}
>
> -	hw->cvq = cvq;
>  	return 0;
> +
> +cleanup:
> +	/*
> +	 * ctrl queue is the last queue; if we go here, it means the ctrl
> +	 * queue is not allocated, that we can do no cleanup for it here.
> +	 */
> +	while (i > 0) {
> +		i--;
> +		if (i % 2 == 0)
> +			virtio_dev_rx_queue_release(&hw->vqs[i]->rxq);
> +		else
> +			virtio_dev_tx_queue_release(&hw->vqs[i]->txq);
> +	}
> +	rte_free(hw->vqs);
> +
> +	return ret;
>  }
>
>  static void
> @@ -1141,6 +1167,7 @@ virtio_init_device(struct rte_eth_dev *eth_dev, uint64_t req_features)
>  	struct virtio_net_config *config;
>  	struct virtio_net_config local_config;
>  	struct rte_pci_device *pci_dev = eth_dev->pci_dev;
> +	int ret;
>
>  	/* Reset the device although not necessary at startup */
>  	vtpci_reset(hw);
> @@ -1222,6 +1249,10 @@ virtio_init_device(struct rte_eth_dev *eth_dev, uint64_t req_features)
>  		hw->max_queue_pairs = 1;
>  	}
>
> +	ret = virtio_alloc_queues(eth_dev);
> +	if (ret < 0)
> +		return ret;
> +
>  	if (pci_dev)
>  		PMD_INIT_LOG(DEBUG, "port %d vendorID=0x%x deviceID=0x%x",
>  			eth_dev->data->port_id, pci_dev->id.vendor_id,
> @@ -1390,15 +1421,9 @@ virtio_dev_configure(struct rte_eth_dev *dev)
>  		return -ENOTSUP;
>  	}
>
> -	/* Setup and start control queue */
> -	if (vtpci_with_feature(hw, VIRTIO_NET_F_CTRL_VQ)) {
> -		ret = virtio_dev_cq_queue_setup(dev,
> -			hw->max_queue_pairs * 2,
> -			SOCKET_ID_ANY);
> -		if (ret < 0)
> -			return ret;
> +	/* start control queue */
> +	if (vtpci_with_feature(hw, VIRTIO_NET_F_CTRL_VQ))
>  		virtio_dev_cq_start(dev);
> -	}
>
>  	hw->vlan_strip = rxmode->hw_vlan_strip;
>
> diff --git a/drivers/net/virtio/virtio_ethdev.h b/drivers/net/virtio/virtio_ethdev.h
> index de33b32..5db8632 100644
> --- a/drivers/net/virtio/virtio_ethdev.h
> +++ b/drivers/net/virtio/virtio_ethdev.h
> @@ -80,14 +80,6 @@ void virtio_dev_cq_start(struct rte_eth_dev *dev);
>   */
>  void virtio_dev_rxtx_start(struct rte_eth_dev *dev);
>
> -int virtio_dev_queue_setup(struct rte_eth_dev *dev,
> -			int queue_type,
> -			uint16_t queue_idx,
> -			uint16_t vtpci_queue_idx,
> -			uint16_t nb_desc,
> -			unsigned int socket_id,
> -			void **pvq);
> -
>  void virtio_dev_queue_release(struct virtqueue *vq);
>
>  int  virtio_dev_rx_queue_setup(struct rte_eth_dev *dev, uint16_t rx_queue_id,
> diff --git a/drivers/net/virtio/virtio_pci.h b/drivers/net/virtio/virtio_pci.h
> index 0c5ed31..f63f76c 100644
> --- a/drivers/net/virtio/virtio_pci.h
> +++ b/drivers/net/virtio/virtio_pci.h
> @@ -264,6 +264,8 @@ struct virtio_hw {
>  	struct virtio_net_config *dev_cfg;
>  	const struct virtio_pci_ops *vtpci_ops;
>  	void	    *virtio_user_dev;
> +
> +	struct virtqueue **vqs;
>  };
>
>  /*
> diff --git a/drivers/net/virtio/virtio_rxtx.c b/drivers/net/virtio/virtio_rxtx.c
> index b4c4aa4..fb703d2 100644
> --- a/drivers/net/virtio/virtio_rxtx.c
> +++ b/drivers/net/virtio/virtio_rxtx.c
> @@ -530,24 +530,24 @@ int
>  virtio_dev_rx_queue_setup(struct rte_eth_dev *dev,
>  			uint16_t queue_idx,
>  			uint16_t nb_desc,
> -			unsigned int socket_id,
> +			unsigned int socket_id __rte_unused,
>  			__rte_unused const struct rte_eth_rxconf *rx_conf,
>  			struct rte_mempool *mp)
>  {
>  	uint16_t vtpci_queue_idx = 2 * queue_idx + VTNET_SQ_RQ_QUEUE_IDX;
> +	struct virtio_hw *hw = dev->data->dev_private;
> +	struct virtqueue *vq = hw->vqs[vtpci_queue_idx];
>  	struct virtnet_rx *rxvq;
> -	int ret;
>
>  	PMD_INIT_FUNC_TRACE();
> -	ret = virtio_dev_queue_setup(dev, VTNET_RQ, queue_idx, vtpci_queue_idx,
> -			nb_desc, socket_id, (void **)&rxvq);
> -	if (ret < 0) {
> -		PMD_INIT_LOG(ERR, "rvq initialization failed");
> -		return ret;
> -	}
>
> -	/* Create mempool for rx mbuf allocation */
> +	if (nb_desc == 0 || nb_desc > vq->vq_nentries)
> +		nb_desc = vq->vq_nentries;
> +	vq->vq_free_cnt = RTE_MIN(vq->vq_free_cnt, nb_desc);
> +
> +	rxvq = &vq->rxq;
>  	rxvq->mpool = mp;
> +	rxvq->queue_id = queue_idx;
>
>  	dev->data->rx_queues[queue_idx] = rxvq;
>
> @@ -613,27 +613,25 @@ int
>  virtio_dev_tx_queue_setup(struct rte_eth_dev *dev,
>  			uint16_t queue_idx,
>  			uint16_t nb_desc,
> -			unsigned int socket_id,
> +			unsigned int socket_id __rte_unused,
>  			const struct rte_eth_txconf *tx_conf)
>  {
>  	uint8_t vtpci_queue_idx = 2 * queue_idx + VTNET_SQ_TQ_QUEUE_IDX;
> +	struct virtio_hw *hw = dev->data->dev_private;
> +	struct virtqueue *vq = hw->vqs[vtpci_queue_idx];
>  	struct virtnet_tx *txvq;
> -	struct virtqueue *vq;
>  	uint16_t tx_free_thresh;
> -	int ret;
>
>  	PMD_INIT_FUNC_TRACE();
>
> -
>  	virtio_update_rxtx_handler(dev, tx_conf);
>
> -	ret = virtio_dev_queue_setup(dev, VTNET_TQ, queue_idx, vtpci_queue_idx,
> -			nb_desc, socket_id, (void **)&txvq);
> -	if (ret < 0) {
> -		PMD_INIT_LOG(ERR, "tvq initialization failed");
> -		return ret;
> -	}
> -	vq = txvq->vq;
> +	if (nb_desc == 0 || nb_desc > vq->vq_nentries)
> +		nb_desc = vq->vq_nentries;
> +	vq->vq_free_cnt = RTE_MIN(vq->vq_free_cnt, nb_desc);
> +
> +	txvq = &vq->txq;
> +	txvq->queue_id = queue_idx;
>
>  	tx_free_thresh = tx_conf->tx_free_thresh;
>  	if (tx_free_thresh == 0)
>

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [dpdk-dev] [PATCH 4/8] net/virtio: allocate queue at init stage
  2016-11-03 21:11   ` Maxime Coquelin
@ 2016-11-04  1:50     ` Yuanhan Liu
  2016-11-04  8:08       ` Maxime Coquelin
  0 siblings, 1 reply; 43+ messages in thread
From: Yuanhan Liu @ 2016-11-04  1:50 UTC (permalink / raw)
  To: Maxime Coquelin; +Cc: dev, Ilya Maximets

On Thu, Nov 03, 2016 at 10:11:43PM +0100, Maxime Coquelin wrote:
> 
> 
> On 11/03/2016 05:09 PM, Yuanhan Liu wrote:
> >Queue allocation should be done once, since the queue related info (such
> >as vring address) will only be informed to the vhost-user backend once,
> >without a virtio device reset.
> >
> >That means, if you allocate queues again after the vhost-user negotiation,
> >the vhost-user backend will not be informed any more, leading to a state
> >where the vring info mismatches between the virtio PMD driver and the
> >vhost backend: the driver switches to the new address that has just been
> >allocated, while the vhost backend still sticks to the old address that
> >was assigned in the init stage.
> >
> >Unfortunately, that is exactly how the virtio driver is coded so far: queue
> >allocation is done at queue_setup stage (when rte_eth_tx/rx_queue_setup is
> >invoked). This is wrong, because queue_setup can be invoked several times.
> >For example,
> >
> >    $ start_testpmd.sh ... --txq=1 --rxq=1 ...
> >    > port stop 0
> >    > port config all txq 1 # just trigger the queue_setup callback again
> >    > port config all rxq 1
> >    > port start 0
> >
> >The right way to do is allocate the queues in the init stage, so that the
> >vring info could be persistent with the vhost-user backend.
> >
> >Besides that, we should allocate max_queue pairs the device supports, but
> >not nr queue pairs firstly configured, to make following case work.
> I understand, but how much memory overhead does that represent?

We are allocating the max queue pairs the device supports, not the max
the virtio-net spec supports, which, as you stated, would be too much.

I don't know the typical number of queue pairs used in production, but
I would assume it is small, something like 2 or 4. It's 1 by default,
after all.

So I think it will not be an issue.

> Have you considered performing a device reset when queue number is
> changed?

Not a good idea nor a clean solution, to me.

	--yliu

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [dpdk-dev] [PATCH 3/8] net/virtio: simplify queue allocation
  2016-11-03 20:48   ` Maxime Coquelin
@ 2016-11-04  1:51     ` Yuanhan Liu
  0 siblings, 0 replies; 43+ messages in thread
From: Yuanhan Liu @ 2016-11-04  1:51 UTC (permalink / raw)
  To: Maxime Coquelin; +Cc: dev, Ilya Maximets

On Thu, Nov 03, 2016 at 09:48:03PM +0100, Maxime Coquelin wrote:
> 
> 
> On 11/03/2016 05:09 PM, Yuanhan Liu wrote:
> >Let rxq/txq/cq be the union field of the virtqueue struct. This would
> >simplify the vq allocation a bit: we don't need to calculate the vq_size
> >any more based on the queue time.
> s/time/type/ ?

Oops... will fix it.

Thanks.

	--yliu
> >
> >Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
> >---
> > drivers/net/virtio/virtio_ethdev.c | 18 +++++++-----------
> > drivers/net/virtio/virtqueue.h     |  7 +++++++
> > 2 files changed, 14 insertions(+), 11 deletions(-)
> 
> Other than that:
> Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
> 
> Thanks,
> Maxime

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [dpdk-dev] [PATCH 1/8] net/virtio: revert "virtio: fix restart"
  2016-11-03 20:36   ` Maxime Coquelin
@ 2016-11-04  2:00     ` Yuanhan Liu
  2016-11-04  8:09       ` Maxime Coquelin
  0 siblings, 1 reply; 43+ messages in thread
From: Yuanhan Liu @ 2016-11-04  2:00 UTC (permalink / raw)
  To: Maxime Coquelin; +Cc: dev, stable, Ilya Maximets

On Thu, Nov 03, 2016 at 09:36:52PM +0100, Maxime Coquelin wrote:
> Hi Yuanhan,
> 
> On 11/03/2016 05:09 PM, Yuanhan Liu wrote:
> >This reverts commit 9a0615af7746 ("virtio: fix restart"); conflict is
> >manually addressed.
> >
> >Kyle reported an issue with above commit
> >
> >    qemu-kvm: Guest moved used index from 5 to 1
> >
> >with following steps,
> >
> >    1) Start my virtio interfaces
> >    2) Send some traffic into/out of the interfaces
> >    3) Stop the interfaces
> >    4) Start the interfaces
> >    5) Send some more traffic
> >
> >And here are some quotes from Kyle's analysis,
> >
> >    Prior to the patch, if an interface were stopped then started, without
> >    restarting the application, the queues would be left as-is, because
> >    hw->started would be set to 1. Now, calling stop sets hw->started to 0,
> >    which means the next call to start will "touch the queues". This is the
> >    unintended side-effect that causes the problem.
> 
> Maybe a good idea to explain what is the problem the revert aims to fix.

It aims to fix the issue, by "not touching the queues" on restart.

> It does not seem to be clearly stated in the commit message.

I was thinking the quote from Kyle was enough. How about the following supplement:

    We should not touch the queues once the init is done; otherwise, the
    vring state of the virtio PMD driver and vhost-user would become
    inconsistent, leading to issues like the above.

    Thus, this patch is reverted.

Better now?

	--yliu

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [dpdk-dev] [PATCH 4/8] net/virtio: allocate queue at init stage
  2016-11-04  1:50     ` Yuanhan Liu
@ 2016-11-04  8:08       ` Maxime Coquelin
  0 siblings, 0 replies; 43+ messages in thread
From: Maxime Coquelin @ 2016-11-04  8:08 UTC (permalink / raw)
  To: Yuanhan Liu; +Cc: dev, Ilya Maximets



On 11/04/2016 02:50 AM, Yuanhan Liu wrote:
> On Thu, Nov 03, 2016 at 10:11:43PM +0100, Maxime Coquelin wrote:
>>
>>
>> On 11/03/2016 05:09 PM, Yuanhan Liu wrote:
>>> Queue allocation should be done once, since the queue-related info
>>> (such as the vring address) is only communicated to the vhost-user
>>> backend once, absent a virtio device reset.
>>>
>>> That means if you allocate queues again after the vhost-user
>>> negotiation, the vhost-user backend will not be informed, leaving the
>>> vring info mismatched between the virtio PMD driver and the vhost
>>> backend: the driver switches to the newly allocated address, while the
>>> vhost backend still sticks to the old address assigned at the init
>>> stage.
>>>
>>> Unfortunately, that is exactly how the virtio driver is coded so far: queue
>>> allocation is done at queue_setup stage (when rte_eth_tx/rx_queue_setup is
>>> invoked). This is wrong, because queue_setup can be invoked several times.
>>> For example,
>>>
>>>    $ start_testpmd.sh ... --txq=1 --rxq=1 ...
>>>    > port stop 0
>>>    > port config all txq 1 # just trigger the queue_setup callback again
>>>    > port config all rxq 1
>>>    > port start 0
>>>
>>> The right way is to allocate the queues at the init stage, so that
>>> the vring info stays consistent with the vhost-user backend.
>>>
>>> Besides that, we should allocate the max queue pairs the device
>>> supports, not the number of queue pairs first configured, to make the
>>> following case work.
>> I understand, but how much memory overhead does that represent?
>
> We are allocating the max queue pairs the device supports, not the max
> the virtio-net spec supports, which, as you stated, would be too much.
Oh, ok. In this case this is good to me.

Thanks,
Maxime

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [dpdk-dev] [PATCH 1/8] net/virtio: revert "virtio: fix restart"
  2016-11-04  2:00     ` Yuanhan Liu
@ 2016-11-04  8:09       ` Maxime Coquelin
  2016-11-04 14:28         ` Yuanhan Liu
  0 siblings, 1 reply; 43+ messages in thread
From: Maxime Coquelin @ 2016-11-04  8:09 UTC (permalink / raw)
  To: Yuanhan Liu; +Cc: dev, stable, Ilya Maximets



On 11/04/2016 03:00 AM, Yuanhan Liu wrote:
> On Thu, Nov 03, 2016 at 09:36:52PM +0100, Maxime Coquelin wrote:
>> Hi Yuanhan,
>>
>> On 11/03/2016 05:09 PM, Yuanhan Liu wrote:
>>> This reverts commit 9a0615af7746 ("virtio: fix restart"); conflict is
>>> manually addressed.
>>>
>>> Kyle reported an issue with above commit
>>>
>>>    qemu-kvm: Guest moved used index from 5 to 1
>>>
>>> with following steps,
>>>
>>>    1) Start my virtio interfaces
>>>    2) Send some traffic into/out of the interfaces
>>>    3) Stop the interfaces
>>>    4) Start the interfaces
>>>    5) Send some more traffic
>>>
>>> And here are some quotes from Kyle's analysis,
>>>
>>>    Prior to the patch, if an interface were stopped then started, without
>>>    restarting the application, the queues would be left as-is, because
>>>    hw->started would be set to 1. Now, calling stop sets hw->started to 0,
>>>    which means the next call to start will "touch the queues". This is the
>>>    unintended side-effect that causes the problem.
>>
>> Maybe a good idea to explain what is the problem the revert aims to fix.
>
> It aims to fix the issue, by "not touching the queues" on restart.
>
>> It does not seem to be clearly stated in the commit message.
>
> I was thinking the quote from Kyle was enough. How about the following supplement:
>
>     We should not touch the queues once the init is done; otherwise, the
>     vring state of the virtio PMD driver and vhost-user would become
>     inconsistent, leading to issues like the above.
>
>     Thus, this patch is reverted.
>
> Better now?
Yes, this is much clearer from my PoV.

Thanks!
Maxime

>
> 	--yliu
>

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [dpdk-dev] [PATCH 1/8] net/virtio: revert "virtio: fix restart"
  2016-11-03 16:09 ` [dpdk-dev] [PATCH 1/8] net/virtio: revert "virtio: fix restart" Yuanhan Liu
  2016-11-03 20:36   ` Maxime Coquelin
@ 2016-11-04  8:10   ` Maxime Coquelin
  1 sibling, 0 replies; 43+ messages in thread
From: Maxime Coquelin @ 2016-11-04  8:10 UTC (permalink / raw)
  To: Yuanhan Liu, dev; +Cc: stable, Ilya Maximets



On 11/03/2016 05:09 PM, Yuanhan Liu wrote:
> This reverts commit 9a0615af7746 ("virtio: fix restart"); conflict is
> manually addressed.
>
> Kyle reported an issue with above commit
>
>     qemu-kvm: Guest moved used index from 5 to 1
>
> with following steps,
>
>     1) Start my virtio interfaces
>     2) Send some traffic into/out of the interfaces
>     3) Stop the interfaces
>     4) Start the interfaces
>     5) Send some more traffic
>
> And here are some quotes from Kyle's analysis,
>
>     Prior to the patch, if an interface were stopped then started, without
>     restarting the application, the queues would be left as-is, because
>     hw->started would be set to 1. Now, calling stop sets hw->started to 0,
>     which means the next call to start will "touch the queues". This is the
>     unintended side-effect that causes the problem.
>
> Fixes: 9a0615af7746 ("virtio: fix restart")
>
> Cc: Jianfeng Tan <jianfeng.tan@intel.com>
> Cc: <stable@dpdk.org>
> Reported-by: Kyle Larose <klarose@sandvine.com>
> Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>

Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>

Thanks,
Maxime

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [dpdk-dev] [PATCH 4/8] net/virtio: allocate queue at init stage
  2016-11-03 16:09 ` [dpdk-dev] [PATCH 4/8] net/virtio: allocate queue at init stage Yuanhan Liu
  2016-11-03 21:11   ` Maxime Coquelin
@ 2016-11-04  8:25   ` Maxime Coquelin
  2016-11-04 15:21   ` Kevin Traynor
  2 siblings, 0 replies; 43+ messages in thread
From: Maxime Coquelin @ 2016-11-04  8:25 UTC (permalink / raw)
  To: Yuanhan Liu, dev; +Cc: Ilya Maximets



On 11/03/2016 05:09 PM, Yuanhan Liu wrote:
> Queue allocation should be done once, since the queue-related info
> (such as the vring address) is only communicated to the vhost-user
> backend once, absent a virtio device reset.
>
> That means if you allocate queues again after the vhost-user
> negotiation, the vhost-user backend will not be informed, leaving the
> vring info mismatched between the virtio PMD driver and the vhost
> backend: the driver switches to the newly allocated address, while the
> vhost backend still sticks to the old address assigned at the init
> stage.
>
> Unfortunately, that is exactly how the virtio driver is coded so far: queue
> allocation is done at queue_setup stage (when rte_eth_tx/rx_queue_setup is
> invoked). This is wrong, because queue_setup can be invoked several times.
> For example,
>
>     $ start_testpmd.sh ... --txq=1 --rxq=1 ...
>     > port stop 0
>     > port config all txq 1 # just trigger the queue_setup callback again
>     > port config all rxq 1
>     > port start 0
>
> The right way is to allocate the queues at the init stage, so that
> the vring info stays consistent with the vhost-user backend.
>
> Besides that, we should allocate the max queue pairs the device
> supports, not the number of queue pairs first configured, to make the
> following case work.
>
>     $ start_testpmd.sh ... --txq=1 --rxq=1 ...
>     > port stop 0
>     > port config all txq 2
>     > port config all rxq 2
>     > port start 0
>
> Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
> ---
>  drivers/net/virtio/virtio_ethdev.c | 105 +++++++++++++++++++++++--------------
>  drivers/net/virtio/virtio_ethdev.h |   8 ---
>  drivers/net/virtio/virtio_pci.h    |   2 +
>  drivers/net/virtio/virtio_rxtx.c   |  38 +++++++-------
>  4 files changed, 85 insertions(+), 68 deletions(-)
>

Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>

Thanks,
Maxime

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [dpdk-dev] [PATCH 5/8] net/virtio: initiate vring at init stage
  2016-11-03 16:09 ` [dpdk-dev] [PATCH 5/8] net/virtio: initiate vring " Yuanhan Liu
@ 2016-11-04  8:34   ` Maxime Coquelin
  0 siblings, 0 replies; 43+ messages in thread
From: Maxime Coquelin @ 2016-11-04  8:34 UTC (permalink / raw)
  To: Yuanhan Liu, dev; +Cc: Ilya Maximets



On 11/03/2016 05:09 PM, Yuanhan Liu wrote:
> virtio_dev_vring_start() is actually doing the vring initiation job,
> and the vring initiation job should be done at the dev init stage, as
> stated in great detail in the former commit.
>
> So move it there, and rename it to virtio_init_vring().
>
> Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
> ---
>  drivers/net/virtio/virtio_ethdev.c | 32 +++++++++++++++++++++++++++++++-
>  drivers/net/virtio/virtio_rxtx.c   | 32 --------------------------------
>  2 files changed, 31 insertions(+), 33 deletions(-)


Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>

Thanks,
Maxime

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [dpdk-dev] [PATCH 6/8] net/virtio: move queue configure code to proper place
  2016-11-03 16:09 ` [dpdk-dev] [PATCH 6/8] net/virtio: move queue configure code to proper place Yuanhan Liu
@ 2016-11-04  8:39   ` Maxime Coquelin
  0 siblings, 0 replies; 43+ messages in thread
From: Maxime Coquelin @ 2016-11-04  8:39 UTC (permalink / raw)
  To: Yuanhan Liu, dev; +Cc: Ilya Maximets



On 11/03/2016 05:09 PM, Yuanhan Liu wrote:
> The only piece of code in virtio_dev_rxtx_start() is actually doing
> queue configure/setup work. So, move it to the corresponding
> queue_setup callback.
>
> Once that is done, virtio_dev_rxtx_start() becomes an empty function,
> thus it is removed.
>
> Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
> ---
>  drivers/net/virtio/virtio_ethdev.c |   2 -
>  drivers/net/virtio/virtio_ethdev.h |   2 -
>  drivers/net/virtio/virtio_rxtx.c   | 187 ++++++++++++++++---------------------
>  3 files changed, 79 insertions(+), 112 deletions(-)

Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>

Thanks,
Maxime

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [dpdk-dev] [PATCH 7/8] net/virtio: complete init stage at the right place
  2016-11-03 16:09 ` [dpdk-dev] [PATCH 7/8] net/virtio: complete init stage at the right place Yuanhan Liu
@ 2016-11-04  8:44   ` Maxime Coquelin
  0 siblings, 0 replies; 43+ messages in thread
From: Maxime Coquelin @ 2016-11-04  8:44 UTC (permalink / raw)
  To: Yuanhan Liu, dev; +Cc: Ilya Maximets



On 11/03/2016 05:09 PM, Yuanhan Liu wrote:
> Invoking vtpci_reinit_complete() at the port start stage doesn't make
> any sense; instead, it should be done at the end of the dev init stage.
>
> So move it here.
>
> Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
> ---
>  drivers/net/virtio/virtio_ethdev.c | 3 +--
>  1 file changed, 1 insertion(+), 2 deletions(-)

Makes sense:
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>

Thanks,
Maxime

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [dpdk-dev] [PATCH 8/8] net/virtio: remove started field
  2016-11-03 16:10 ` [dpdk-dev] [PATCH 8/8] net/virtio: remove started field Yuanhan Liu
@ 2016-11-04  8:46   ` Maxime Coquelin
  0 siblings, 0 replies; 43+ messages in thread
From: Maxime Coquelin @ 2016-11-04  8:46 UTC (permalink / raw)
  To: Yuanhan Liu, dev; +Cc: Ilya Maximets



On 11/03/2016 05:10 PM, Yuanhan Liu wrote:
> The "hw->started" field was introduced to stop touching queues
> on restart. We never touch the queues on restart any more, thus
> it's safe to remove this flag.
>
> Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
> ---
>  drivers/net/virtio/virtio_ethdev.c | 15 ++-------------
>  drivers/net/virtio/virtio_pci.h    |  1 -
>  2 files changed, 2 insertions(+), 14 deletions(-)

Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>

Thanks,
Maxime

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [dpdk-dev] [PATCH 1/8] net/virtio: revert "virtio: fix restart"
  2016-11-04  8:09       ` Maxime Coquelin
@ 2016-11-04 14:28         ` Yuanhan Liu
  0 siblings, 0 replies; 43+ messages in thread
From: Yuanhan Liu @ 2016-11-04 14:28 UTC (permalink / raw)
  To: Maxime Coquelin; +Cc: dev, stable, Ilya Maximets

On Fri, Nov 04, 2016 at 09:09:22AM +0100, Maxime Coquelin wrote:
> 
> 
> On 11/04/2016 03:00 AM, Yuanhan Liu wrote:
> >On Thu, Nov 03, 2016 at 09:36:52PM +0100, Maxime Coquelin wrote:
> >>Hi Yuanhan,
> >>
> >>On 11/03/2016 05:09 PM, Yuanhan Liu wrote:
> >>>This reverts commit 9a0615af7746 ("virtio: fix restart"); conflict is
> >>>manually addressed.
> >>>
> >>>Kyle reported an issue with above commit
> >>>
> >>>   qemu-kvm: Guest moved used index from 5 to 1
> >>>
> >>>with following steps,
> >>>
> >>>   1) Start my virtio interfaces
> >>>   2) Send some traffic into/out of the interfaces
> >>>   3) Stop the interfaces
> >>>   4) Start the interfaces
> >>>   5) Send some more traffic
> >>>
> >>>And here are some quotes from Kyle's analysis,
> >>>
> >>>   Prior to the patch, if an interface were stopped then started, without
> >>>   restarting the application, the queues would be left as-is, because
> >>>   hw->started would be set to 1. Now, calling stop sets hw->started to 0,
> >>>   which means the next call to start will "touch the queues". This is the
> >>>   unintended side-effect that causes the problem.
> >>
> >>Maybe a good idea to explain what is the problem the revert aims to fix.
> >
> >It aims to fix the issue, by "not touching the queues" on restart.
> >
> >>It does not seem to be clearly stated in the commit message.
> >
> >I was thinking the quote from Kyle was enough. How about the following supplement:
> >
> >    We should not touch the queues once the init is done; otherwise, the
> >    vring state of the virtio PMD driver and vhost-user would become
> >    inconsistent, leading to issues like the above.
> >
> >    Thus, this patch is reverted.
> >
> >Better now?
> Yes, this is much clearer from my PoV.

Fixed, and series applied to dpdk-next-virtio.

Thanks for the review!

	--yliu

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [dpdk-dev] [PATCH 4/8] net/virtio: allocate queue at init stage
  2016-11-03 16:09 ` [dpdk-dev] [PATCH 4/8] net/virtio: allocate queue at init stage Yuanhan Liu
  2016-11-03 21:11   ` Maxime Coquelin
  2016-11-04  8:25   ` Maxime Coquelin
@ 2016-11-04 15:21   ` Kevin Traynor
  2016-11-04 20:30     ` Kevin Traynor
  2 siblings, 1 reply; 43+ messages in thread
From: Kevin Traynor @ 2016-11-04 15:21 UTC (permalink / raw)
  To: Yuanhan Liu, dev
  Cc: Thomas Monjalon, Tan Jianfeng, Ilya Maximets, Kyle Larose

On 11/03/2016 04:09 PM, Yuanhan Liu wrote:
> Queue allocation should be done once, since the queue-related info
> (such as the vring address) is only communicated to the vhost-user
> backend once, absent a virtio device reset.
>
> That means if you allocate queues again after the vhost-user
> negotiation, the vhost-user backend will not be informed, leaving the
> vring info mismatched between the virtio PMD driver and the vhost
> backend: the driver switches to the newly allocated address, while the
> vhost backend still sticks to the old address assigned at the init
> stage.
> 
> Unfortunately, that is exactly how the virtio driver is coded so far: queue
> allocation is done at queue_setup stage (when rte_eth_tx/rx_queue_setup is
> invoked). This is wrong, because queue_setup can be invoked several times.
> For example,
> 
>     $ start_testpmd.sh ... --txq=1 --rxq=1 ...
>     > port stop 0
>     > port config all txq 1 # just trigger the queue_setup callback again
>     > port config all rxq 1
>     > port start 0
> 
> The right way is to allocate the queues at the init stage, so that
> the vring info stays consistent with the vhost-user backend.
>
> Besides that, we should allocate the max queue pairs the device
> supports, not the number of queue pairs first configured, to make the
> following case work.
> 
>     $ start_testpmd.sh ... --txq=1 --rxq=1 ...
>     > port stop 0
>     > port config all txq 2
>     > port config all rxq 2
>     > port start 0

hi Yuanhan, firstly - thanks for this patchset. It is certainly needed
to fix the silent failure after increasing the number of queues.

I tried a few tests and I'm seeing an issue. I can stop the port,
increase the number of queues and traffic is ok, but if I try to
decrease the number of queues, it hangs on port start. I'm running the
head of master with your patches in the guest and 16.07 in the host.

$ testpmd -c 0x5f -n 4 --socket-mem 1024 -- --burst=64 -i
--disable-hw-vlan --rxq=2 --txq=2 --rxd=256 --txd=256 --forward-mode=io
> port stop all
> port config all rxq 1
> port config all txq 1
> port start all
Configuring Port 0 (socket 0)
(hang here)

I've tested a few different scenarios, and any time the number of
queues is decreased from the previous number, the hang occurs.

I can debug further, but wanted to report early in case the issue is an
obvious one.

thanks,
Kevin.

> 
> Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
> ---
>  drivers/net/virtio/virtio_ethdev.c | 105 +++++++++++++++++++++++--------------
>  drivers/net/virtio/virtio_ethdev.h |   8 ---
>  drivers/net/virtio/virtio_pci.h    |   2 +
>  drivers/net/virtio/virtio_rxtx.c   |  38 +++++++-------
>  4 files changed, 85 insertions(+), 68 deletions(-)
> 
> diff --git a/drivers/net/virtio/virtio_ethdev.c b/drivers/net/virtio/virtio_ethdev.c
> index 5a2c14b..253bcb5 100644
> --- a/drivers/net/virtio/virtio_ethdev.c
> +++ b/drivers/net/virtio/virtio_ethdev.c
> @@ -295,13 +295,19 @@ virtio_dev_queue_release(struct virtqueue *vq)
>  	}
>  }
>  
> -int virtio_dev_queue_setup(struct rte_eth_dev *dev,
> -			int queue_type,
> -			uint16_t queue_idx,
> -			uint16_t vtpci_queue_idx,
> -			uint16_t nb_desc,
> -			unsigned int socket_id,
> -			void **pvq)
> +static int
> +virtio_get_queue_type(struct virtio_hw *hw, uint16_t vtpci_queue_idx)
> +{
> +	if (vtpci_queue_idx == hw->max_queue_pairs * 2)
> +		return VTNET_CQ;
> +	else if (vtpci_queue_idx % 2 == 0)
> +		return VTNET_RQ;
> +	else
> +		return VTNET_TQ;
> +}
> +
> +static int
> +virtio_init_queue(struct rte_eth_dev *dev, uint16_t vtpci_queue_idx)
>  {
>  	char vq_name[VIRTQUEUE_MAX_NAME_SZ];
>  	char vq_hdr_name[VIRTQUEUE_MAX_NAME_SZ];
> @@ -314,6 +320,7 @@ int virtio_dev_queue_setup(struct rte_eth_dev *dev,
>  	struct virtqueue *vq;
>  	size_t sz_hdr_mz = 0;
>  	void *sw_ring = NULL;
> +	int queue_type = virtio_get_queue_type(hw, vtpci_queue_idx);
>  	int ret;
>  
>  	PMD_INIT_LOG(DEBUG, "setting up queue: %u", vtpci_queue_idx);
> @@ -351,18 +358,18 @@ int virtio_dev_queue_setup(struct rte_eth_dev *dev,
>  		sz_hdr_mz = PAGE_SIZE;
>  	}
>  
> -	vq = rte_zmalloc_socket(vq_name, size, RTE_CACHE_LINE_SIZE, socket_id);
> +	vq = rte_zmalloc_socket(vq_name, size, RTE_CACHE_LINE_SIZE,
> +				SOCKET_ID_ANY);
>  	if (vq == NULL) {
>  		PMD_INIT_LOG(ERR, "can not allocate vq");
>  		return -ENOMEM;
>  	}
> +	hw->vqs[vtpci_queue_idx] = vq;
> +
>  	vq->hw = hw;
>  	vq->vq_queue_index = vtpci_queue_idx;
>  	vq->vq_nentries = vq_size;
> -
> -	if (nb_desc == 0 || nb_desc > vq_size)
> -		nb_desc = vq_size;
> -	vq->vq_free_cnt = nb_desc;
> +	vq->vq_free_cnt = vq_size;
>  
>  	/*
>  	 * Reserve a memzone for vring elements
> @@ -372,7 +379,8 @@ int virtio_dev_queue_setup(struct rte_eth_dev *dev,
>  	PMD_INIT_LOG(DEBUG, "vring_size: %d, rounded_vring_size: %d",
>  		     size, vq->vq_ring_size);
>  
> -	mz = rte_memzone_reserve_aligned(vq_name, vq->vq_ring_size, socket_id,
> +	mz = rte_memzone_reserve_aligned(vq_name, vq->vq_ring_size,
> +					 SOCKET_ID_ANY,
>  					 0, VIRTIO_PCI_VRING_ALIGN);
>  	if (mz == NULL) {
>  		if (rte_errno == EEXIST)
> @@ -396,7 +404,7 @@ int virtio_dev_queue_setup(struct rte_eth_dev *dev,
>  		snprintf(vq_hdr_name, sizeof(vq_hdr_name), "port%d_vq%d_hdr",
>  			 dev->data->port_id, vtpci_queue_idx);
>  		hdr_mz = rte_memzone_reserve_aligned(vq_hdr_name, sz_hdr_mz,
> -						     socket_id, 0,
> +						     SOCKET_ID_ANY, 0,
>  						     RTE_CACHE_LINE_SIZE);
>  		if (hdr_mz == NULL) {
>  			if (rte_errno == EEXIST)
> @@ -413,7 +421,7 @@ int virtio_dev_queue_setup(struct rte_eth_dev *dev,
>  			       sizeof(vq->sw_ring[0]);
>  
>  		sw_ring = rte_zmalloc_socket("sw_ring", sz_sw,
> -					     RTE_CACHE_LINE_SIZE, socket_id);
> +				RTE_CACHE_LINE_SIZE, SOCKET_ID_ANY);
>  		if (!sw_ring) {
>  			PMD_INIT_LOG(ERR, "can not allocate RX soft ring");
>  			ret = -ENOMEM;
> @@ -424,19 +432,14 @@ int virtio_dev_queue_setup(struct rte_eth_dev *dev,
>  		rxvq = &vq->rxq;
>  		rxvq->vq = vq;
>  		rxvq->port_id = dev->data->port_id;
> -		rxvq->queue_id = queue_idx;
>  		rxvq->mz = mz;
> -		*pvq = rxvq;
>  	} else if (queue_type == VTNET_TQ) {
>  		txvq = &vq->txq;
>  		txvq->vq = vq;
>  		txvq->port_id = dev->data->port_id;
> -		txvq->queue_id = queue_idx;
>  		txvq->mz = mz;
>  		txvq->virtio_net_hdr_mz = hdr_mz;
>  		txvq->virtio_net_hdr_mem = hdr_mz->phys_addr;
> -
> -		*pvq = txvq;
>  	} else if (queue_type == VTNET_CQ) {
>  		cvq = &vq->cq;
>  		cvq->vq = vq;
> @@ -444,7 +447,8 @@ int virtio_dev_queue_setup(struct rte_eth_dev *dev,
>  		cvq->virtio_net_hdr_mz = hdr_mz;
>  		cvq->virtio_net_hdr_mem = hdr_mz->phys_addr;
>  		memset(cvq->virtio_net_hdr_mz->addr, 0, PAGE_SIZE);
> -		*pvq = cvq;
> +
> +		hw->cvq = cvq;
>  	}
>  
>  	/* For virtio_user case (that is when dev->pci_dev is NULL), we use
> @@ -502,23 +506,45 @@ fail_q_alloc:
>  }
>  
>  static int
> -virtio_dev_cq_queue_setup(struct rte_eth_dev *dev, uint16_t vtpci_queue_idx,
> -		uint32_t socket_id)
> +virtio_alloc_queues(struct rte_eth_dev *dev)
>  {
> -	struct virtnet_ctl *cvq;
> -	int ret;
>  	struct virtio_hw *hw = dev->data->dev_private;
> +	uint16_t nr_vq = hw->max_queue_pairs * 2;
> +	uint16_t i;
> +	int ret;
>  
> -	PMD_INIT_FUNC_TRACE();
> -	ret = virtio_dev_queue_setup(dev, VTNET_CQ, VTNET_SQ_CQ_QUEUE_IDX,
> -			vtpci_queue_idx, 0, socket_id, (void **)&cvq);
> -	if (ret < 0) {
> -		PMD_INIT_LOG(ERR, "control vq initialization failed");
> -		return ret;
> +	if (vtpci_with_feature(hw, VIRTIO_NET_F_CTRL_VQ))
> +		nr_vq += 1;
> +
> +	hw->vqs = rte_zmalloc(NULL, sizeof(struct virtqueue *) * nr_vq, 0);
> +	if (!hw->vqs) {
> +		PMD_INIT_LOG(ERR, "failed to allocate vqs");
> +		return -ENOMEM;
> +	}
> +
> +	for (i = 0; i < nr_vq; i++) {
> +		ret = virtio_init_queue(dev, i);
> +		if (ret < 0)
> +			goto cleanup;
>  	}
>  
> -	hw->cvq = cvq;
>  	return 0;
> +
> +cleanup:
> +	/*
> +	 * ctrl queue is the last queue; if we go here, it means the ctrl
> +	 * queue is not allocated, that we can do no cleanup for it here.
> +	 */
> +	while (i > 0) {
> +		i--;
> +		if (i % 2 == 0)
> +			virtio_dev_rx_queue_release(&hw->vqs[i]->rxq);
> +		else
> +			virtio_dev_tx_queue_release(&hw->vqs[i]->txq);
> +	}
> +	rte_free(hw->vqs);
> +
> +	return ret;
>  }
>  
>  static void
> @@ -1141,6 +1167,7 @@ virtio_init_device(struct rte_eth_dev *eth_dev, uint64_t req_features)
>  	struct virtio_net_config *config;
>  	struct virtio_net_config local_config;
>  	struct rte_pci_device *pci_dev = eth_dev->pci_dev;
> +	int ret;
>  
>  	/* Reset the device although not necessary at startup */
>  	vtpci_reset(hw);
> @@ -1222,6 +1249,10 @@ virtio_init_device(struct rte_eth_dev *eth_dev, uint64_t req_features)
>  		hw->max_queue_pairs = 1;
>  	}
>  
> +	ret = virtio_alloc_queues(eth_dev);
> +	if (ret < 0)
> +		return ret;
> +
>  	if (pci_dev)
>  		PMD_INIT_LOG(DEBUG, "port %d vendorID=0x%x deviceID=0x%x",
>  			eth_dev->data->port_id, pci_dev->id.vendor_id,
> @@ -1390,15 +1421,9 @@ virtio_dev_configure(struct rte_eth_dev *dev)
>  		return -ENOTSUP;
>  	}
>  
> -	/* Setup and start control queue */
> -	if (vtpci_with_feature(hw, VIRTIO_NET_F_CTRL_VQ)) {
> -		ret = virtio_dev_cq_queue_setup(dev,
> -			hw->max_queue_pairs * 2,
> -			SOCKET_ID_ANY);
> -		if (ret < 0)
> -			return ret;
> +	/* start control queue */
> +	if (vtpci_with_feature(hw, VIRTIO_NET_F_CTRL_VQ))
>  		virtio_dev_cq_start(dev);
> -	}
>  
>  	hw->vlan_strip = rxmode->hw_vlan_strip;
>  
> diff --git a/drivers/net/virtio/virtio_ethdev.h b/drivers/net/virtio/virtio_ethdev.h
> index de33b32..5db8632 100644
> --- a/drivers/net/virtio/virtio_ethdev.h
> +++ b/drivers/net/virtio/virtio_ethdev.h
> @@ -80,14 +80,6 @@ void virtio_dev_cq_start(struct rte_eth_dev *dev);
>   */
>  void virtio_dev_rxtx_start(struct rte_eth_dev *dev);
>  
> -int virtio_dev_queue_setup(struct rte_eth_dev *dev,
> -			int queue_type,
> -			uint16_t queue_idx,
> -			uint16_t vtpci_queue_idx,
> -			uint16_t nb_desc,
> -			unsigned int socket_id,
> -			void **pvq);
> -
>  void virtio_dev_queue_release(struct virtqueue *vq);
>  
>  int  virtio_dev_rx_queue_setup(struct rte_eth_dev *dev, uint16_t rx_queue_id,
> diff --git a/drivers/net/virtio/virtio_pci.h b/drivers/net/virtio/virtio_pci.h
> index 0c5ed31..f63f76c 100644
> --- a/drivers/net/virtio/virtio_pci.h
> +++ b/drivers/net/virtio/virtio_pci.h
> @@ -264,6 +264,8 @@ struct virtio_hw {
>  	struct virtio_net_config *dev_cfg;
>  	const struct virtio_pci_ops *vtpci_ops;
>  	void	    *virtio_user_dev;
> +
> +	struct virtqueue **vqs;
>  };
>  
>  /*
> diff --git a/drivers/net/virtio/virtio_rxtx.c b/drivers/net/virtio/virtio_rxtx.c
> index b4c4aa4..fb703d2 100644
> --- a/drivers/net/virtio/virtio_rxtx.c
> +++ b/drivers/net/virtio/virtio_rxtx.c
> @@ -530,24 +530,24 @@ int
>  virtio_dev_rx_queue_setup(struct rte_eth_dev *dev,
>  			uint16_t queue_idx,
>  			uint16_t nb_desc,
> -			unsigned int socket_id,
> +			unsigned int socket_id __rte_unused,
>  			__rte_unused const struct rte_eth_rxconf *rx_conf,
>  			struct rte_mempool *mp)
>  {
>  	uint16_t vtpci_queue_idx = 2 * queue_idx + VTNET_SQ_RQ_QUEUE_IDX;
> +	struct virtio_hw *hw = dev->data->dev_private;
> +	struct virtqueue *vq = hw->vqs[vtpci_queue_idx];
>  	struct virtnet_rx *rxvq;
> -	int ret;
>  
>  	PMD_INIT_FUNC_TRACE();
> -	ret = virtio_dev_queue_setup(dev, VTNET_RQ, queue_idx, vtpci_queue_idx,
> -			nb_desc, socket_id, (void **)&rxvq);
> -	if (ret < 0) {
> -		PMD_INIT_LOG(ERR, "rvq initialization failed");
> -		return ret;
> -	}
>  
> -	/* Create mempool for rx mbuf allocation */
> +	if (nb_desc == 0 || nb_desc > vq->vq_nentries)
> +		nb_desc = vq->vq_nentries;
> +	vq->vq_free_cnt = RTE_MIN(vq->vq_free_cnt, nb_desc);
> +
> +	rxvq = &vq->rxq;
>  	rxvq->mpool = mp;
> +	rxvq->queue_id = queue_idx;
>  
>  	dev->data->rx_queues[queue_idx] = rxvq;
>  
> @@ -613,27 +613,25 @@ int
>  virtio_dev_tx_queue_setup(struct rte_eth_dev *dev,
>  			uint16_t queue_idx,
>  			uint16_t nb_desc,
> -			unsigned int socket_id,
> +			unsigned int socket_id __rte_unused,
>  			const struct rte_eth_txconf *tx_conf)
>  {
>  	uint8_t vtpci_queue_idx = 2 * queue_idx + VTNET_SQ_TQ_QUEUE_IDX;
> +	struct virtio_hw *hw = dev->data->dev_private;
> +	struct virtqueue *vq = hw->vqs[vtpci_queue_idx];
>  	struct virtnet_tx *txvq;
> -	struct virtqueue *vq;
>  	uint16_t tx_free_thresh;
> -	int ret;
>  
>  	PMD_INIT_FUNC_TRACE();
>  
> -
>  	virtio_update_rxtx_handler(dev, tx_conf);
>  
> -	ret = virtio_dev_queue_setup(dev, VTNET_TQ, queue_idx, vtpci_queue_idx,
> -			nb_desc, socket_id, (void **)&txvq);
> -	if (ret < 0) {
> -		PMD_INIT_LOG(ERR, "tvq initialization failed");
> -		return ret;
> -	}
> -	vq = txvq->vq;
> +	if (nb_desc == 0 || nb_desc > vq->vq_nentries)
> +		nb_desc = vq->vq_nentries;
> +	vq->vq_free_cnt = RTE_MIN(vq->vq_free_cnt, nb_desc);
> +
> +	txvq = &vq->txq;
> +	txvq->queue_id = queue_idx;
>  
>  	tx_free_thresh = tx_conf->tx_free_thresh;
>  	if (tx_free_thresh == 0)
> 

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [dpdk-dev] [PATCH 4/8] net/virtio: allocate queue at init stage
  2016-11-04 15:21   ` Kevin Traynor
@ 2016-11-04 20:30     ` Kevin Traynor
  2016-11-05  6:15       ` Yuanhan Liu
  0 siblings, 1 reply; 43+ messages in thread
From: Kevin Traynor @ 2016-11-04 20:30 UTC (permalink / raw)
  To: Yuanhan Liu, dev
  Cc: Thomas Monjalon, Tan Jianfeng, Ilya Maximets, Kyle Larose

On 11/04/2016 03:21 PM, Kevin Traynor wrote:
> On 11/03/2016 04:09 PM, Yuanhan Liu wrote:
>> Queue allocation should be done once, since the queue-related info
>> (such as the vring address) is only communicated to the vhost-user
>> backend once, absent a virtio device reset.
>>
>> That means if you allocate queues again after the vhost-user
>> negotiation, the vhost-user backend will not be informed, leaving the
>> vring info mismatched between the virtio PMD driver and the vhost
>> backend: the driver switches to the newly allocated address, while the
>> vhost backend still sticks to the old address assigned at the init
>> stage.
>>
>> Unfortunately, that is exactly how the virtio driver is coded so far: queue
>> allocation is done at queue_setup stage (when rte_eth_tx/rx_queue_setup is
>> invoked). This is wrong, because queue_setup can be invoked several times.
>> For example,
>>
>>     $ start_testpmd.sh ... --txq=1 --rxq=1 ...
>>     > port stop 0
>>     > port config all txq 1 # just trigger the queue_setup callback again
>>     > port config all rxq 1
>>     > port start 0
>>
>> The right way is to allocate the queues at the init stage, so that
>> the vring info stays consistent with the vhost-user backend.
>>
>> Besides that, we should allocate the max queue pairs the device
>> supports, not the number of queue pairs first configured, to make the
>> following case work.
>>
>>     $ start_testpmd.sh ... --txq=1 --rxq=1 ...
>>     > port stop 0
>>     > port config all txq 2
>>     > port config all rxq 2
>>     > port start 0
> 
> hi Yuanhan, firstly - thanks for this patchset. It is certainly needed
> to fix the silent failure after increasing the number of queues.
> 
> I tried a few tests and I'm seeing an issue. I can stop the port,
> increase the number of queues and traffic is ok, but if I try to
> decrease the number of queues, it hangs on port start. I'm running the
> head of master with your patches in the guest and 16.07 in the host.
> 
> $ testpmd -c 0x5f -n 4 --socket-mem 1024 -- --burst=64 -i
> --disable-hw-vlan --rxq=2 --txq=2 --rxd=256 --txd=256 --forward-mode=io
>> port stop all
>> port config all rxq 1
>> port config all txq 1
>> port start all
> Configuring Port 0 (socket 0)
> (hang here)
> 
> I've tested a few different scenarios, and any time the number of
> queues is decreased from the previous number, the hang occurs.
> 
> I can debug further, but wanted to report early in case the issue is an
> obvious one.

virtio_dev_start() is getting stuck as soon as it needs to send a
command via virtio_send_command(), at this line of code:

	while (VIRTQUEUE_NUSED(vq) == 0) {
		rte_rmb();
		usleep(100);
	}

whereby:
#define VIRTQUEUE_NUSED(vq) ((uint16_t)((vq)->vq_ring.used->idx -
(vq)->vq_used_cons_idx))

and they are both the same value (7 in my case).

> 
> thanks,
> Kevin.
> 
>>
>> Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
>> ---
>>  drivers/net/virtio/virtio_ethdev.c | 105 +++++++++++++++++++++++--------------
>>  drivers/net/virtio/virtio_ethdev.h |   8 ---
>>  drivers/net/virtio/virtio_pci.h    |   2 +
>>  drivers/net/virtio/virtio_rxtx.c   |  38 +++++++-------
>>  4 files changed, 85 insertions(+), 68 deletions(-)
>>
>> diff --git a/drivers/net/virtio/virtio_ethdev.c b/drivers/net/virtio/virtio_ethdev.c
>> index 5a2c14b..253bcb5 100644
>> --- a/drivers/net/virtio/virtio_ethdev.c
>> +++ b/drivers/net/virtio/virtio_ethdev.c
>> @@ -295,13 +295,19 @@ virtio_dev_queue_release(struct virtqueue *vq)
>>  	}
>>  }
>>  
>> -int virtio_dev_queue_setup(struct rte_eth_dev *dev,
>> -			int queue_type,
>> -			uint16_t queue_idx,
>> -			uint16_t vtpci_queue_idx,
>> -			uint16_t nb_desc,
>> -			unsigned int socket_id,
>> -			void **pvq)
>> +static int
>> +virtio_get_queue_type(struct virtio_hw *hw, uint16_t vtpci_queue_idx)
>> +{
>> +	if (vtpci_queue_idx == hw->max_queue_pairs * 2)
>> +		return VTNET_CQ;
>> +	else if (vtpci_queue_idx % 2 == 0)
>> +		return VTNET_RQ;
>> +	else
>> +		return VTNET_TQ;
>> +}
>> +
>> +static int
>> +virtio_init_queue(struct rte_eth_dev *dev, uint16_t vtpci_queue_idx)
>>  {
>>  	char vq_name[VIRTQUEUE_MAX_NAME_SZ];
>>  	char vq_hdr_name[VIRTQUEUE_MAX_NAME_SZ];
>> @@ -314,6 +320,7 @@ int virtio_dev_queue_setup(struct rte_eth_dev *dev,
>>  	struct virtqueue *vq;
>>  	size_t sz_hdr_mz = 0;
>>  	void *sw_ring = NULL;
>> +	int queue_type = virtio_get_queue_type(hw, vtpci_queue_idx);
>>  	int ret;
>>  
>>  	PMD_INIT_LOG(DEBUG, "setting up queue: %u", vtpci_queue_idx);
>> @@ -351,18 +358,18 @@ int virtio_dev_queue_setup(struct rte_eth_dev *dev,
>>  		sz_hdr_mz = PAGE_SIZE;
>>  	}
>>  
>> -	vq = rte_zmalloc_socket(vq_name, size, RTE_CACHE_LINE_SIZE, socket_id);
>> +	vq = rte_zmalloc_socket(vq_name, size, RTE_CACHE_LINE_SIZE,
>> +				SOCKET_ID_ANY);
>>  	if (vq == NULL) {
>>  		PMD_INIT_LOG(ERR, "can not allocate vq");
>>  		return -ENOMEM;
>>  	}
>> +	hw->vqs[vtpci_queue_idx] = vq;
>> +
>>  	vq->hw = hw;
>>  	vq->vq_queue_index = vtpci_queue_idx;
>>  	vq->vq_nentries = vq_size;
>> -
>> -	if (nb_desc == 0 || nb_desc > vq_size)
>> -		nb_desc = vq_size;
>> -	vq->vq_free_cnt = nb_desc;
>> +	vq->vq_free_cnt = vq_size;
>>  
>>  	/*
>>  	 * Reserve a memzone for vring elements
>> @@ -372,7 +379,8 @@ int virtio_dev_queue_setup(struct rte_eth_dev *dev,
>>  	PMD_INIT_LOG(DEBUG, "vring_size: %d, rounded_vring_size: %d",
>>  		     size, vq->vq_ring_size);
>>  
>> -	mz = rte_memzone_reserve_aligned(vq_name, vq->vq_ring_size, socket_id,
>> +	mz = rte_memzone_reserve_aligned(vq_name, vq->vq_ring_size,
>> +					 SOCKET_ID_ANY,
>>  					 0, VIRTIO_PCI_VRING_ALIGN);
>>  	if (mz == NULL) {
>>  		if (rte_errno == EEXIST)
>> @@ -396,7 +404,7 @@ int virtio_dev_queue_setup(struct rte_eth_dev *dev,
>>  		snprintf(vq_hdr_name, sizeof(vq_hdr_name), "port%d_vq%d_hdr",
>>  			 dev->data->port_id, vtpci_queue_idx);
>>  		hdr_mz = rte_memzone_reserve_aligned(vq_hdr_name, sz_hdr_mz,
>> -						     socket_id, 0,
>> +						     SOCKET_ID_ANY, 0,
>>  						     RTE_CACHE_LINE_SIZE);
>>  		if (hdr_mz == NULL) {
>>  			if (rte_errno == EEXIST)
>> @@ -413,7 +421,7 @@ int virtio_dev_queue_setup(struct rte_eth_dev *dev,
>>  			       sizeof(vq->sw_ring[0]);
>>  
>>  		sw_ring = rte_zmalloc_socket("sw_ring", sz_sw,
>> -					     RTE_CACHE_LINE_SIZE, socket_id);
>> +				RTE_CACHE_LINE_SIZE, SOCKET_ID_ANY);
>>  		if (!sw_ring) {
>>  			PMD_INIT_LOG(ERR, "can not allocate RX soft ring");
>>  			ret = -ENOMEM;
>> @@ -424,19 +432,14 @@ int virtio_dev_queue_setup(struct rte_eth_dev *dev,
>>  		rxvq = &vq->rxq;
>>  		rxvq->vq = vq;
>>  		rxvq->port_id = dev->data->port_id;
>> -		rxvq->queue_id = queue_idx;
>>  		rxvq->mz = mz;
>> -		*pvq = rxvq;
>>  	} else if (queue_type == VTNET_TQ) {
>>  		txvq = &vq->txq;
>>  		txvq->vq = vq;
>>  		txvq->port_id = dev->data->port_id;
>> -		txvq->queue_id = queue_idx;
>>  		txvq->mz = mz;
>>  		txvq->virtio_net_hdr_mz = hdr_mz;
>>  		txvq->virtio_net_hdr_mem = hdr_mz->phys_addr;
>> -
>> -		*pvq = txvq;
>>  	} else if (queue_type == VTNET_CQ) {
>>  		cvq = &vq->cq;
>>  		cvq->vq = vq;
>> @@ -444,7 +447,8 @@ int virtio_dev_queue_setup(struct rte_eth_dev *dev,
>>  		cvq->virtio_net_hdr_mz = hdr_mz;
>>  		cvq->virtio_net_hdr_mem = hdr_mz->phys_addr;
>>  		memset(cvq->virtio_net_hdr_mz->addr, 0, PAGE_SIZE);
>> -		*pvq = cvq;
>> +
>> +		hw->cvq = cvq;
>>  	}
>>  
>>  	/* For virtio_user case (that is when dev->pci_dev is NULL), we use
>> @@ -502,23 +506,45 @@ fail_q_alloc:
>>  }
>>  
>>  static int
>> -virtio_dev_cq_queue_setup(struct rte_eth_dev *dev, uint16_t vtpci_queue_idx,
>> -		uint32_t socket_id)
>> +virtio_alloc_queues(struct rte_eth_dev *dev)
>>  {
>> -	struct virtnet_ctl *cvq;
>> -	int ret;
>>  	struct virtio_hw *hw = dev->data->dev_private;
>> +	uint16_t nr_vq = hw->max_queue_pairs * 2;
>> +	uint16_t i;
>> +	int ret;
>>  
>> -	PMD_INIT_FUNC_TRACE();
>> -	ret = virtio_dev_queue_setup(dev, VTNET_CQ, VTNET_SQ_CQ_QUEUE_IDX,
>> -			vtpci_queue_idx, 0, socket_id, (void **)&cvq);
>> -	if (ret < 0) {
>> -		PMD_INIT_LOG(ERR, "control vq initialization failed");
>> -		return ret;
>> +	if (vtpci_with_feature(hw, VIRTIO_NET_F_CTRL_VQ))
>> +		nr_vq += 1;
>> +
>> +	hw->vqs = rte_zmalloc(NULL, sizeof(struct virtqueue *) * nr_vq, 0);
>> +	if (!hw->vqs) {
>> +		PMD_INIT_LOG(ERR, "failed to allocate vqs");
>> +		return -ENOMEM;
>> +	}
>> +
>> +	for (i = 0; i < nr_vq; i++) {
>> +		ret = virtio_init_queue(dev, i);
>> +		if (ret < 0)
>> +			goto cleanup;
>>  	}
>>  
>> -	hw->cvq = cvq;
>>  	return 0;
>> +
>> +cleanup:
>> +	/*
>> +	 * ctrl queue is the last queue; if we go here, it means the ctrl
>> +	 * queue is not allocated, that we can do no cleanup for it here.
>> +	 */
>> +	while (i > 0) {
>> +		i--;
>> +		if (i % 2 == 0)
>> +			virtio_dev_rx_queue_release(&hw->vqs[i]->rxq);
>> +		else
>> +			virtio_dev_tx_queue_release(&hw->vqs[i]->txq);
>> +	}
>> +	rte_free(hw->vqs);
>> +
>> +	return ret;
>>  }
>>  
>>  static void
>> @@ -1141,6 +1167,7 @@ virtio_init_device(struct rte_eth_dev *eth_dev, uint64_t req_features)
>>  	struct virtio_net_config *config;
>>  	struct virtio_net_config local_config;
>>  	struct rte_pci_device *pci_dev = eth_dev->pci_dev;
>> +	int ret;
>>  
>>  	/* Reset the device although not necessary at startup */
>>  	vtpci_reset(hw);
>> @@ -1222,6 +1249,10 @@ virtio_init_device(struct rte_eth_dev *eth_dev, uint64_t req_features)
>>  		hw->max_queue_pairs = 1;
>>  	}
>>  
>> +	ret = virtio_alloc_queues(eth_dev);
>> +	if (ret < 0)
>> +		return ret;
>> +
>>  	if (pci_dev)
>>  		PMD_INIT_LOG(DEBUG, "port %d vendorID=0x%x deviceID=0x%x",
>>  			eth_dev->data->port_id, pci_dev->id.vendor_id,
>> @@ -1390,15 +1421,9 @@ virtio_dev_configure(struct rte_eth_dev *dev)
>>  		return -ENOTSUP;
>>  	}
>>  
>> -	/* Setup and start control queue */
>> -	if (vtpci_with_feature(hw, VIRTIO_NET_F_CTRL_VQ)) {
>> -		ret = virtio_dev_cq_queue_setup(dev,
>> -			hw->max_queue_pairs * 2,
>> -			SOCKET_ID_ANY);
>> -		if (ret < 0)
>> -			return ret;
>> +	/* start control queue */
>> +	if (vtpci_with_feature(hw, VIRTIO_NET_F_CTRL_VQ))
>>  		virtio_dev_cq_start(dev);
>> -	}
>>  
>>  	hw->vlan_strip = rxmode->hw_vlan_strip;
>>  
>> diff --git a/drivers/net/virtio/virtio_ethdev.h b/drivers/net/virtio/virtio_ethdev.h
>> index de33b32..5db8632 100644
>> --- a/drivers/net/virtio/virtio_ethdev.h
>> +++ b/drivers/net/virtio/virtio_ethdev.h
>> @@ -80,14 +80,6 @@ void virtio_dev_cq_start(struct rte_eth_dev *dev);
>>   */
>>  void virtio_dev_rxtx_start(struct rte_eth_dev *dev);
>>  
>> -int virtio_dev_queue_setup(struct rte_eth_dev *dev,
>> -			int queue_type,
>> -			uint16_t queue_idx,
>> -			uint16_t vtpci_queue_idx,
>> -			uint16_t nb_desc,
>> -			unsigned int socket_id,
>> -			void **pvq);
>> -
>>  void virtio_dev_queue_release(struct virtqueue *vq);
>>  
>>  int  virtio_dev_rx_queue_setup(struct rte_eth_dev *dev, uint16_t rx_queue_id,
>> diff --git a/drivers/net/virtio/virtio_pci.h b/drivers/net/virtio/virtio_pci.h
>> index 0c5ed31..f63f76c 100644
>> --- a/drivers/net/virtio/virtio_pci.h
>> +++ b/drivers/net/virtio/virtio_pci.h
>> @@ -264,6 +264,8 @@ struct virtio_hw {
>>  	struct virtio_net_config *dev_cfg;
>>  	const struct virtio_pci_ops *vtpci_ops;
>>  	void	    *virtio_user_dev;
>> +
>> +	struct virtqueue **vqs;
>>  };
>>  
>>  /*
>> diff --git a/drivers/net/virtio/virtio_rxtx.c b/drivers/net/virtio/virtio_rxtx.c
>> index b4c4aa4..fb703d2 100644
>> --- a/drivers/net/virtio/virtio_rxtx.c
>> +++ b/drivers/net/virtio/virtio_rxtx.c
>> @@ -530,24 +530,24 @@ int
>>  virtio_dev_rx_queue_setup(struct rte_eth_dev *dev,
>>  			uint16_t queue_idx,
>>  			uint16_t nb_desc,
>> -			unsigned int socket_id,
>> +			unsigned int socket_id __rte_unused,
>>  			__rte_unused const struct rte_eth_rxconf *rx_conf,
>>  			struct rte_mempool *mp)
>>  {
>>  	uint16_t vtpci_queue_idx = 2 * queue_idx + VTNET_SQ_RQ_QUEUE_IDX;
>> +	struct virtio_hw *hw = dev->data->dev_private;
>> +	struct virtqueue *vq = hw->vqs[vtpci_queue_idx];
>>  	struct virtnet_rx *rxvq;
>> -	int ret;
>>  
>>  	PMD_INIT_FUNC_TRACE();
>> -	ret = virtio_dev_queue_setup(dev, VTNET_RQ, queue_idx, vtpci_queue_idx,
>> -			nb_desc, socket_id, (void **)&rxvq);
>> -	if (ret < 0) {
>> -		PMD_INIT_LOG(ERR, "rvq initialization failed");
>> -		return ret;
>> -	}
>>  
>> -	/* Create mempool for rx mbuf allocation */
>> +	if (nb_desc == 0 || nb_desc > vq->vq_nentries)
>> +		nb_desc = vq->vq_nentries;
>> +	vq->vq_free_cnt = RTE_MIN(vq->vq_free_cnt, nb_desc);
>> +
>> +	rxvq = &vq->rxq;
>>  	rxvq->mpool = mp;
>> +	rxvq->queue_id = queue_idx;
>>  
>>  	dev->data->rx_queues[queue_idx] = rxvq;
>>  
>> @@ -613,27 +613,25 @@ int
>>  virtio_dev_tx_queue_setup(struct rte_eth_dev *dev,
>>  			uint16_t queue_idx,
>>  			uint16_t nb_desc,
>> -			unsigned int socket_id,
>> +			unsigned int socket_id __rte_unused,
>>  			const struct rte_eth_txconf *tx_conf)
>>  {
>>  	uint8_t vtpci_queue_idx = 2 * queue_idx + VTNET_SQ_TQ_QUEUE_IDX;
>> +	struct virtio_hw *hw = dev->data->dev_private;
>> +	struct virtqueue *vq = hw->vqs[vtpci_queue_idx];
>>  	struct virtnet_tx *txvq;
>> -	struct virtqueue *vq;
>>  	uint16_t tx_free_thresh;
>> -	int ret;
>>  
>>  	PMD_INIT_FUNC_TRACE();
>>  
>> -
>>  	virtio_update_rxtx_handler(dev, tx_conf);
>>  
>> -	ret = virtio_dev_queue_setup(dev, VTNET_TQ, queue_idx, vtpci_queue_idx,
>> -			nb_desc, socket_id, (void **)&txvq);
>> -	if (ret < 0) {
>> -		PMD_INIT_LOG(ERR, "tvq initialization failed");
>> -		return ret;
>> -	}
>> -	vq = txvq->vq;
>> +	if (nb_desc == 0 || nb_desc > vq->vq_nentries)
>> +		nb_desc = vq->vq_nentries;
>> +	vq->vq_free_cnt = RTE_MIN(vq->vq_free_cnt, nb_desc);
>> +
>> +	txvq = &vq->txq;
>> +	txvq->queue_id = queue_idx;
>>  
>>  	tx_free_thresh = tx_conf->tx_free_thresh;
>>  	if (tx_free_thresh == 0)
>>
> 

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [dpdk-dev] [PATCH 4/8] net/virtio: allocate queue at init stage
  2016-11-04 20:30     ` Kevin Traynor
@ 2016-11-05  6:15       ` Yuanhan Liu
  0 siblings, 0 replies; 43+ messages in thread
From: Yuanhan Liu @ 2016-11-05  6:15 UTC (permalink / raw)
  To: Kevin Traynor
  Cc: dev, Thomas Monjalon, Tan Jianfeng, Ilya Maximets, Kyle Larose

On Fri, Nov 04, 2016 at 08:30:00PM +0000, Kevin Traynor wrote:
> On 11/04/2016 03:21 PM, Kevin Traynor wrote:
> > On 11/03/2016 04:09 PM, Yuanhan Liu wrote:
> >> Queue allocation should be done once, since the queue related info (such
> >> as the vring address) will only be informed to the vhost-user backend
> >> once, without a virtio device reset.
> >>
> >> That means, if you allocate queues again after the vhost-user negotiation,
> >> the vhost-user backend will not be informed any more, leading to a state
> >> where the vring info mismatches between the virtio PMD driver and the
> >> vhost backend: the driver switches to the new address that has just been
> >> allocated, while the vhost backend still sticks to the old address
> >> assigned at the init stage.
> >>
> >> Unfortunately, that is exactly how the virtio driver is coded so far: queue
> >> allocation is done at queue_setup stage (when rte_eth_tx/rx_queue_setup is
> >> invoked). This is wrong, because queue_setup can be invoked several times.
> >> For example,
> >>
> >>     $ start_testpmd.sh ... --txq=1 --rxq=1 ...
> >>     > port stop 0
> >>     > port config all txq 1 # just trigger the queue_setup callback again
> >>     > port config all rxq 1
> >>     > port start 0
> >>
> >> The right way is to allocate the queues at the init stage, so that the
> >> vring info stays consistent with the vhost-user backend.
> >>
> >> Besides that, we should allocate the max queue pairs the device supports,
> >> not just the number of queue pairs configured initially, to make the
> >> following case work.
> >>
> >>     $ start_testpmd.sh ... --txq=1 --rxq=1 ...
> >>     > port stop 0
> >>     > port config all txq 2
> >>     > port config all rxq 2
> >>     > port start 0
> > 
> > hi Yuanhan, firstly - thanks for this patchset. It is certainly needed
> > to fix the silent failure after increasing the number of queues.
> > 
> > I tried a few tests and I'm seeing an issue. I can stop the port,
> > increase the number of queues and traffic is ok, but if I try to
> > decrease the number of queues it hangs on port start. I'm running the
> > head of master with your patches in the guest and 16.07 in the host.
> > 
> > $ testpmd -c 0x5f -n 4 --socket-mem 1024 -- --burst=64 -i
> > --disable-hw-vlan --rxq=2 --txq=2 --rxd=256 --txd=256 --forward-mode=io
> >> port stop all
> >> port config all rxq 1
> >> port config all txq 1
> >> port start all
> > Configuring Port 0 (socket 0)
> > (hang here)
> > 
> > I've tested a few different scenarios and anytime the queues are
> > decreased from the previous number the hang occurs.
> > 
> > I can debug further but wanted to report early, as maybe the issue is an
> > obvious one?

Kevin, thanks for testing! Hmm, it's a case I missed: I was thinking about
and testing increasing (but not shrinking) the queue number :(

> virtio_dev_start() is getting stuck as soon as it needs to send a

That's because the connection is closed (for a bad reason, see details
below). You could figure it out quickly from the vhost log:

    testpmd> VHOST_CONFIG: read message VHOST_USER_GET_VRING_BASE
    PMD: Connection closed

Those messages show up immediately after you execute "port start all".

It's actually triggered by "rx_queue_release", which in turn invokes
the "del_queue" virtio-pci method. QEMU then resets the device once it
gets that message; hence the log above.

Since we now allocate the queues only once, it no longer makes sense to
free them and invoke the "del_queue" method in the rx/tx_queue_release
callbacks, which are invoked whenever we shrink the queue number.

With that change, this case works like a charm.

I'm about to catch a train soon, but I will try to re-post v2 today, with
another minor fix I noticed while checking this issue: we should also send
the VIRTIO_NET_CTRL_MQ message when the queue number shrinks from 2 to 1.

	--yliu


* [dpdk-dev] [PATCH v2 00/10] net/virtio: fix queue reconfigure issue
  2016-11-03 16:09 [dpdk-dev] [PATCH 0/8] net/virtio: fix queue reconfigure issue Yuanhan Liu
                   ` (7 preceding siblings ...)
  2016-11-03 16:10 ` [dpdk-dev] [PATCH 8/8] net/virtio: remove started field Yuanhan Liu
@ 2016-11-05  9:40 ` Yuanhan Liu
  2016-11-05  9:40   ` [dpdk-dev] [PATCH v2 01/10] net/virtio: revert fix restart Yuanhan Liu
                     ` (11 more replies)
  8 siblings, 12 replies; 43+ messages in thread
From: Yuanhan Liu @ 2016-11-05  9:40 UTC (permalink / raw)
  To: dev
  Cc: Thomas Monjalon, Tan Jianfeng, Kevin Traynor, Ilya Maximets,
	Kyle Larose, Maxime Coquelin, Yuanhan Liu

This patchset fixes a few issues related to virtio queue reconfiguration:
increasing or shrinking the queue number. The major issue and the reason
behind it are described at length in patch 4 "net/virtio: allocate
queue at init stage".

These bugs cannot be fixed with a few lines of code, because the current
driver init logic is quite wrong; I had to change quite a few places to
make it right. Meanwhile, I have done my best to keep the changes as
minimal as possible, so that we have fewer chances of breaking something
else; it also makes the review easier.

v2: - fix two more minor issues regarding queue enabling; see patches 9
      and 10.
    - refined commit log a bit.

Thanks.

	--yliu

---
Yuanhan Liu (10):
  net/virtio: revert fix restart
  net/virtio: simplify queue memzone name
  net/virtio: simplify queue allocation
  net/virtio: allocate queue at init stage
  net/virtio: initiate vring at init stage
  net/virtio: move queue configure code to proper place
  net/virtio: complete init stage at the right place
  net/virtio: remove started field
  net/virtio: fix less queues being enabled issue
  net/virtio: fix multiple queue enabling

 drivers/net/virtio/virtio_ethdev.c | 248 +++++++++++++++++--------------
 drivers/net/virtio/virtio_ethdev.h |  16 --
 drivers/net/virtio/virtio_pci.h    |   3 +-
 drivers/net/virtio/virtio_rxtx.c   | 291 ++++++++++++-------------------------
 drivers/net/virtio/virtqueue.h     |   7 +
 5 files changed, 237 insertions(+), 328 deletions(-)

-- 
1.9.0


* [dpdk-dev] [PATCH v2 01/10] net/virtio: revert fix restart
  2016-11-05  9:40 ` [dpdk-dev] [PATCH v2 00/10] net/virtio: fix queue reconfigure issue Yuanhan Liu
@ 2016-11-05  9:40   ` Yuanhan Liu
  2016-11-05  9:40   ` [dpdk-dev] [PATCH v2 02/10] net/virtio: simplify queue memzone name Yuanhan Liu
                     ` (10 subsequent siblings)
  11 siblings, 0 replies; 43+ messages in thread
From: Yuanhan Liu @ 2016-11-05  9:40 UTC (permalink / raw)
  To: dev
  Cc: Thomas Monjalon, Tan Jianfeng, Kevin Traynor, Ilya Maximets,
	Kyle Larose, Maxime Coquelin, Yuanhan Liu, stable

This reverts commit 9a0615af7746 ("virtio: fix restart"); a conflict was
addressed manually.

Kyle reported an issue with the above commit:

    qemu-kvm: Guest moved used index from 5 to 1

with following steps,

    1) Start my virtio interfaces
    2) Send some traffic into/out of the interfaces
    3) Stop the interfaces
    4) Start the interfaces
    5) Send some more traffic

And here are some quotes from Kyle's analysis,

    Prior to the patch, if an interface were stopped then started, without
    restarting the application, the queues would be left as-is, because
    hw->started would be set to 1. Now, calling stop sets hw->started to 0,
    which means the next call to start will "touch the queues". This is the
    unintended side-effect that causes the problem.

We should not touch the queues once the init is done; otherwise, the vring
state of the virtio PMD driver and the vhost-user backend would become
inconsistent, leading to issues like the above.

Thus this patch is reverted.

Fixes: 9a0615af7746 ("virtio: fix restart")

Cc: Jianfeng Tan <jianfeng.tan@intel.com>
Cc: <stable@dpdk.org>
Reported-by: Kyle Larose <klarose@sandvine.com>
Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
---
 drivers/net/virtio/virtio_ethdev.c | 15 ++++++---------
 1 file changed, 6 insertions(+), 9 deletions(-)

diff --git a/drivers/net/virtio/virtio_ethdev.c b/drivers/net/virtio/virtio_ethdev.c
index b388134..5815875 100644
--- a/drivers/net/virtio/virtio_ethdev.c
+++ b/drivers/net/virtio/virtio_ethdev.c
@@ -550,9 +550,6 @@ virtio_dev_close(struct rte_eth_dev *dev)
 
 	PMD_INIT_LOG(DEBUG, "virtio_dev_close");
 
-	if (hw->started == 1)
-		virtio_dev_stop(dev);
-
 	if (hw->cvq)
 		virtio_dev_queue_release(hw->cvq->vq);
 
@@ -560,6 +557,7 @@ virtio_dev_close(struct rte_eth_dev *dev)
 	if (dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC)
 		vtpci_irq_config(hw, VIRTIO_MSI_NO_VECTOR);
 	vtpci_reset(hw);
+	hw->started = 0;
 	virtio_dev_free_mbufs(dev);
 	virtio_free_queues(dev);
 }
@@ -1296,15 +1294,17 @@ static int
 eth_virtio_dev_uninit(struct rte_eth_dev *eth_dev)
 {
 	struct rte_pci_device *pci_dev;
+	struct virtio_hw *hw = eth_dev->data->dev_private;
 
 	PMD_INIT_FUNC_TRACE();
 
 	if (rte_eal_process_type() == RTE_PROC_SECONDARY)
 		return -EPERM;
 
-	/* Close it anyway since there's no way to know if closed */
-	virtio_dev_close(eth_dev);
-
+	if (hw->started == 1) {
+		virtio_dev_stop(eth_dev);
+		virtio_dev_close(eth_dev);
+	}
 	pci_dev = eth_dev->pci_dev;
 
 	eth_dev->dev_ops = NULL;
@@ -1543,12 +1543,9 @@ static void
 virtio_dev_stop(struct rte_eth_dev *dev)
 {
 	struct rte_eth_link link;
-	struct virtio_hw *hw = dev->data->dev_private;
 
 	PMD_INIT_LOG(DEBUG, "stop");
 
-	hw->started = 0;
-
 	if (dev->data->dev_conf.intr_conf.lsc)
 		rte_intr_disable(&dev->pci_dev->intr_handle);
 
-- 
1.9.0


* [dpdk-dev] [PATCH v2 02/10] net/virtio: simplify queue memzone name
  2016-11-05  9:40 ` [dpdk-dev] [PATCH v2 00/10] net/virtio: fix queue reconfigure issue Yuanhan Liu
  2016-11-05  9:40   ` [dpdk-dev] [PATCH v2 01/10] net/virtio: revert fix restart Yuanhan Liu
@ 2016-11-05  9:40   ` Yuanhan Liu
  2016-11-05  9:40   ` [dpdk-dev] [PATCH v2 03/10] net/virtio: simplify queue allocation Yuanhan Liu
                     ` (9 subsequent siblings)
  11 siblings, 0 replies; 43+ messages in thread
From: Yuanhan Liu @ 2016-11-05  9:40 UTC (permalink / raw)
  To: dev
  Cc: Thomas Monjalon, Tan Jianfeng, Kevin Traynor, Ilya Maximets,
	Kyle Larose, Maxime Coquelin, Yuanhan Liu

Instead of setting up queue memzone names like "port0_rxq0" and
"port0_txq0", they can be simplified to something like "port0_vq0",
"port0_vq1", and so on.

Meanwhile, the code is also simplified a bit.

Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
---
 drivers/net/virtio/virtio_ethdev.c | 10 ++++------
 1 file changed, 4 insertions(+), 6 deletions(-)

diff --git a/drivers/net/virtio/virtio_ethdev.c b/drivers/net/virtio/virtio_ethdev.c
index 5815875..d082df5 100644
--- a/drivers/net/virtio/virtio_ethdev.c
+++ b/drivers/net/virtio/virtio_ethdev.c
@@ -312,7 +312,6 @@ int virtio_dev_queue_setup(struct rte_eth_dev *dev,
 	struct virtnet_tx *txvq = NULL;
 	struct virtnet_ctl *cvq = NULL;
 	struct virtqueue *vq;
-	const char *queue_names[] = {"rvq", "txq", "cvq"};
 	size_t sz_vq, sz_q = 0, sz_hdr_mz = 0;
 	void *sw_ring = NULL;
 	int ret;
@@ -335,8 +334,8 @@ int virtio_dev_queue_setup(struct rte_eth_dev *dev,
 		return -EINVAL;
 	}
 
-	snprintf(vq_name, sizeof(vq_name), "port%d_%s%d",
-		 dev->data->port_id, queue_names[queue_type], queue_idx);
+	snprintf(vq_name, sizeof(vq_name), "port%d_vq%d",
+		 dev->data->port_id, vtpci_queue_idx);
 
 	sz_vq = RTE_ALIGN_CEIL(sizeof(*vq) +
 				vq_size * sizeof(struct vq_desc_extra),
@@ -398,9 +397,8 @@ int virtio_dev_queue_setup(struct rte_eth_dev *dev,
 		     (uint64_t)(uintptr_t)mz->addr);
 
 	if (sz_hdr_mz) {
-		snprintf(vq_hdr_name, sizeof(vq_hdr_name), "port%d_%s%d_hdr",
-			 dev->data->port_id, queue_names[queue_type],
-			 queue_idx);
+		snprintf(vq_hdr_name, sizeof(vq_hdr_name), "port%d_vq%d_hdr",
+			 dev->data->port_id, vtpci_queue_idx);
 		hdr_mz = rte_memzone_reserve_aligned(vq_hdr_name, sz_hdr_mz,
 						     socket_id, 0,
 						     RTE_CACHE_LINE_SIZE);
-- 
1.9.0


* [dpdk-dev] [PATCH v2 03/10] net/virtio: simplify queue allocation
  2016-11-05  9:40 ` [dpdk-dev] [PATCH v2 00/10] net/virtio: fix queue reconfigure issue Yuanhan Liu
  2016-11-05  9:40   ` [dpdk-dev] [PATCH v2 01/10] net/virtio: revert fix restart Yuanhan Liu
  2016-11-05  9:40   ` [dpdk-dev] [PATCH v2 02/10] net/virtio: simplify queue memzone name Yuanhan Liu
@ 2016-11-05  9:40   ` Yuanhan Liu
  2016-11-05  9:40   ` [dpdk-dev] [PATCH v2 04/10] net/virtio: allocate queue at init stage Yuanhan Liu
                     ` (8 subsequent siblings)
  11 siblings, 0 replies; 43+ messages in thread
From: Yuanhan Liu @ 2016-11-05  9:40 UTC (permalink / raw)
  To: dev
  Cc: Thomas Monjalon, Tan Jianfeng, Kevin Traynor, Ilya Maximets,
	Kyle Larose, Maxime Coquelin, Yuanhan Liu

Let rxq/txq/cq be union fields of the virtqueue struct. This simplifies
the vq allocation a bit: we no longer need to calculate the vq size
based on the queue type.

Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
---
 drivers/net/virtio/virtio_ethdev.c | 18 +++++++-----------
 drivers/net/virtio/virtqueue.h     |  7 +++++++
 2 files changed, 14 insertions(+), 11 deletions(-)

diff --git a/drivers/net/virtio/virtio_ethdev.c b/drivers/net/virtio/virtio_ethdev.c
index d082df5..5a2c14b 100644
--- a/drivers/net/virtio/virtio_ethdev.c
+++ b/drivers/net/virtio/virtio_ethdev.c
@@ -312,7 +312,7 @@ int virtio_dev_queue_setup(struct rte_eth_dev *dev,
 	struct virtnet_tx *txvq = NULL;
 	struct virtnet_ctl *cvq = NULL;
 	struct virtqueue *vq;
-	size_t sz_vq, sz_q = 0, sz_hdr_mz = 0;
+	size_t sz_hdr_mz = 0;
 	void *sw_ring = NULL;
 	int ret;
 
@@ -337,25 +337,21 @@ int virtio_dev_queue_setup(struct rte_eth_dev *dev,
 	snprintf(vq_name, sizeof(vq_name), "port%d_vq%d",
 		 dev->data->port_id, vtpci_queue_idx);
 
-	sz_vq = RTE_ALIGN_CEIL(sizeof(*vq) +
+	size = RTE_ALIGN_CEIL(sizeof(*vq) +
 				vq_size * sizeof(struct vq_desc_extra),
 				RTE_CACHE_LINE_SIZE);
-	if (queue_type == VTNET_RQ) {
-		sz_q = sz_vq + sizeof(*rxvq);
-	} else if (queue_type == VTNET_TQ) {
-		sz_q = sz_vq + sizeof(*txvq);
+	if (queue_type == VTNET_TQ) {
 		/*
 		 * For each xmit packet, allocate a virtio_net_hdr
 		 * and indirect ring elements
 		 */
 		sz_hdr_mz = vq_size * sizeof(struct virtio_tx_region);
 	} else if (queue_type == VTNET_CQ) {
-		sz_q = sz_vq + sizeof(*cvq);
 		/* Allocate a page for control vq command, data and status */
 		sz_hdr_mz = PAGE_SIZE;
 	}
 
-	vq = rte_zmalloc_socket(vq_name, sz_q, RTE_CACHE_LINE_SIZE, socket_id);
+	vq = rte_zmalloc_socket(vq_name, size, RTE_CACHE_LINE_SIZE, socket_id);
 	if (vq == NULL) {
 		PMD_INIT_LOG(ERR, "can not allocate vq");
 		return -ENOMEM;
@@ -425,14 +421,14 @@ int virtio_dev_queue_setup(struct rte_eth_dev *dev,
 		}
 
 		vq->sw_ring = sw_ring;
-		rxvq = (struct virtnet_rx *)RTE_PTR_ADD(vq, sz_vq);
+		rxvq = &vq->rxq;
 		rxvq->vq = vq;
 		rxvq->port_id = dev->data->port_id;
 		rxvq->queue_id = queue_idx;
 		rxvq->mz = mz;
 		*pvq = rxvq;
 	} else if (queue_type == VTNET_TQ) {
-		txvq = (struct virtnet_tx *)RTE_PTR_ADD(vq, sz_vq);
+		txvq = &vq->txq;
 		txvq->vq = vq;
 		txvq->port_id = dev->data->port_id;
 		txvq->queue_id = queue_idx;
@@ -442,7 +438,7 @@ int virtio_dev_queue_setup(struct rte_eth_dev *dev,
 
 		*pvq = txvq;
 	} else if (queue_type == VTNET_CQ) {
-		cvq = (struct virtnet_ctl *)RTE_PTR_ADD(vq, sz_vq);
+		cvq = &vq->cq;
 		cvq->vq = vq;
 		cvq->mz = mz;
 		cvq->virtio_net_hdr_mz = hdr_mz;
diff --git a/drivers/net/virtio/virtqueue.h b/drivers/net/virtio/virtqueue.h
index ef0027b..bbeb2f2 100644
--- a/drivers/net/virtio/virtqueue.h
+++ b/drivers/net/virtio/virtqueue.h
@@ -44,6 +44,7 @@
 #include "virtio_pci.h"
 #include "virtio_ring.h"
 #include "virtio_logs.h"
+#include "virtio_rxtx.h"
 
 struct rte_mbuf;
 
@@ -191,6 +192,12 @@ struct virtqueue {
 	void *vq_ring_virt_mem;  /**< linear address of vring*/
 	unsigned int vq_ring_size;
 
+	union {
+		struct virtnet_rx rxq;
+		struct virtnet_tx txq;
+		struct virtnet_ctl cq;
+	};
+
 	phys_addr_t vq_ring_mem; /**< physical address of vring,
 				  * or virtual address for virtio_user. */
 
-- 
1.9.0


* [dpdk-dev] [PATCH v2 04/10] net/virtio: allocate queue at init stage
  2016-11-05  9:40 ` [dpdk-dev] [PATCH v2 00/10] net/virtio: fix queue reconfigure issue Yuanhan Liu
                     ` (2 preceding siblings ...)
  2016-11-05  9:40   ` [dpdk-dev] [PATCH v2 03/10] net/virtio: simplify queue allocation Yuanhan Liu
@ 2016-11-05  9:40   ` Yuanhan Liu
  2016-11-07 14:23     ` Thomas Monjalon
  2016-11-05  9:41   ` [dpdk-dev] [PATCH v2 05/10] net/virtio: initiate vring " Yuanhan Liu
                     ` (7 subsequent siblings)
  11 siblings, 1 reply; 43+ messages in thread
From: Yuanhan Liu @ 2016-11-05  9:40 UTC (permalink / raw)
  To: dev
  Cc: Thomas Monjalon, Tan Jianfeng, Kevin Traynor, Ilya Maximets,
	Kyle Larose, Maxime Coquelin, Yuanhan Liu

Queue allocation should be done once, since the queue related info (such
as the vring address) will only be informed to the vhost-user backend
once, without a virtio device reset.

That means, if you allocate queues again after the vhost-user negotiation,
the vhost-user backend will not be informed any more, leading to a state
where the vring info mismatches between the virtio PMD driver and the
vhost backend: the driver switches to the new address that has just been
allocated, while the vhost backend still sticks to the old address
assigned at the init stage.

Unfortunately, that is exactly how the virtio driver is coded so far: queue
allocation is done at queue_setup stage (when rte_eth_tx/rx_queue_setup is
invoked). This is wrong, because queue_setup can be invoked several times.
For example,

    $ start_testpmd.sh ... --txq=1 --rxq=1 ...
    > port stop 0
    > port config all txq 1 # just trigger the queue_setup callback again
    > port config all rxq 1
    > port start 0

The right way is to allocate the queues at the init stage, so that the
vring info stays consistent with the vhost-user backend.

Besides that, we should allocate the max queue pairs the device supports,
not just the number of queue pairs configured initially, to make the
following case work.

    $ start_testpmd.sh ... --txq=1 --rxq=1 ...
    > port stop 0
    > port config all txq 2
    > port config all rxq 2
    > port start 0

Since the allocation is moved to the init stage, the freeing should also
be moved from rx/tx_queue_release to the dev close stage. That leaves
rx/tx_queue_release() as an empty (no-op) implementation.

Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
---
v2: - free queues on dev close
---
 drivers/net/virtio/virtio_ethdev.c | 162 +++++++++++++++++++++----------------
 drivers/net/virtio/virtio_ethdev.h |  14 ----
 drivers/net/virtio/virtio_pci.h    |   2 +
 drivers/net/virtio/virtio_rxtx.c   |  83 +++++--------------
 drivers/net/virtio/virtqueue.h     |   1 -
 5 files changed, 111 insertions(+), 151 deletions(-)

diff --git a/drivers/net/virtio/virtio_ethdev.c b/drivers/net/virtio/virtio_ethdev.c
index 5a2c14b..67a175d 100644
--- a/drivers/net/virtio/virtio_ethdev.c
+++ b/drivers/net/virtio/virtio_ethdev.c
@@ -280,28 +280,36 @@ virtio_set_multiple_queues(struct rte_eth_dev *dev, uint16_t nb_queues)
 	return 0;
 }
 
-void
-virtio_dev_queue_release(struct virtqueue *vq)
+static void
+virtio_dev_queue_release(void *queue __rte_unused)
 {
-	struct virtio_hw *hw;
+	/* do nothing */
+}
 
-	if (vq) {
-		hw = vq->hw;
-		if (vq->configured)
-			hw->vtpci_ops->del_queue(hw, vq);
+static int
+virtio_get_queue_type(struct virtio_hw *hw, uint16_t vtpci_queue_idx)
+{
+	if (vtpci_queue_idx == hw->max_queue_pairs * 2)
+		return VTNET_CQ;
+	else if (vtpci_queue_idx % 2 == 0)
+		return VTNET_RQ;
+	else
+		return VTNET_TQ;
+}
 
-		rte_free(vq->sw_ring);
-		rte_free(vq);
-	}
+static uint16_t
+virtio_get_nr_vq(struct virtio_hw *hw)
+{
+	uint16_t nr_vq = hw->max_queue_pairs * 2;
+
+	if (vtpci_with_feature(hw, VIRTIO_NET_F_CTRL_VQ))
+		nr_vq += 1;
+
+	return nr_vq;
 }
 
-int virtio_dev_queue_setup(struct rte_eth_dev *dev,
-			int queue_type,
-			uint16_t queue_idx,
-			uint16_t vtpci_queue_idx,
-			uint16_t nb_desc,
-			unsigned int socket_id,
-			void **pvq)
+static int
+virtio_init_queue(struct rte_eth_dev *dev, uint16_t vtpci_queue_idx)
 {
 	char vq_name[VIRTQUEUE_MAX_NAME_SZ];
 	char vq_hdr_name[VIRTQUEUE_MAX_NAME_SZ];
@@ -314,6 +322,7 @@ int virtio_dev_queue_setup(struct rte_eth_dev *dev,
 	struct virtqueue *vq;
 	size_t sz_hdr_mz = 0;
 	void *sw_ring = NULL;
+	int queue_type = virtio_get_queue_type(hw, vtpci_queue_idx);
 	int ret;
 
 	PMD_INIT_LOG(DEBUG, "setting up queue: %u", vtpci_queue_idx);
@@ -351,18 +360,18 @@ int virtio_dev_queue_setup(struct rte_eth_dev *dev,
 		sz_hdr_mz = PAGE_SIZE;
 	}
 
-	vq = rte_zmalloc_socket(vq_name, size, RTE_CACHE_LINE_SIZE, socket_id);
+	vq = rte_zmalloc_socket(vq_name, size, RTE_CACHE_LINE_SIZE,
+				SOCKET_ID_ANY);
 	if (vq == NULL) {
 		PMD_INIT_LOG(ERR, "can not allocate vq");
 		return -ENOMEM;
 	}
+	hw->vqs[vtpci_queue_idx] = vq;
+
 	vq->hw = hw;
 	vq->vq_queue_index = vtpci_queue_idx;
 	vq->vq_nentries = vq_size;
-
-	if (nb_desc == 0 || nb_desc > vq_size)
-		nb_desc = vq_size;
-	vq->vq_free_cnt = nb_desc;
+	vq->vq_free_cnt = vq_size;
 
 	/*
 	 * Reserve a memzone for vring elements
@@ -372,7 +381,8 @@ int virtio_dev_queue_setup(struct rte_eth_dev *dev,
 	PMD_INIT_LOG(DEBUG, "vring_size: %d, rounded_vring_size: %d",
 		     size, vq->vq_ring_size);
 
-	mz = rte_memzone_reserve_aligned(vq_name, vq->vq_ring_size, socket_id,
+	mz = rte_memzone_reserve_aligned(vq_name, vq->vq_ring_size,
+					 SOCKET_ID_ANY,
 					 0, VIRTIO_PCI_VRING_ALIGN);
 	if (mz == NULL) {
 		if (rte_errno == EEXIST)
@@ -396,7 +406,7 @@ int virtio_dev_queue_setup(struct rte_eth_dev *dev,
 		snprintf(vq_hdr_name, sizeof(vq_hdr_name), "port%d_vq%d_hdr",
 			 dev->data->port_id, vtpci_queue_idx);
 		hdr_mz = rte_memzone_reserve_aligned(vq_hdr_name, sz_hdr_mz,
-						     socket_id, 0,
+						     SOCKET_ID_ANY, 0,
 						     RTE_CACHE_LINE_SIZE);
 		if (hdr_mz == NULL) {
 			if (rte_errno == EEXIST)
@@ -413,7 +423,7 @@ int virtio_dev_queue_setup(struct rte_eth_dev *dev,
 			       sizeof(vq->sw_ring[0]);
 
 		sw_ring = rte_zmalloc_socket("sw_ring", sz_sw,
-					     RTE_CACHE_LINE_SIZE, socket_id);
+				RTE_CACHE_LINE_SIZE, SOCKET_ID_ANY);
 		if (!sw_ring) {
 			PMD_INIT_LOG(ERR, "can not allocate RX soft ring");
 			ret = -ENOMEM;
@@ -424,19 +434,14 @@ int virtio_dev_queue_setup(struct rte_eth_dev *dev,
 		rxvq = &vq->rxq;
 		rxvq->vq = vq;
 		rxvq->port_id = dev->data->port_id;
-		rxvq->queue_id = queue_idx;
 		rxvq->mz = mz;
-		*pvq = rxvq;
 	} else if (queue_type == VTNET_TQ) {
 		txvq = &vq->txq;
 		txvq->vq = vq;
 		txvq->port_id = dev->data->port_id;
-		txvq->queue_id = queue_idx;
 		txvq->mz = mz;
 		txvq->virtio_net_hdr_mz = hdr_mz;
 		txvq->virtio_net_hdr_mem = hdr_mz->phys_addr;
-
-		*pvq = txvq;
 	} else if (queue_type == VTNET_CQ) {
 		cvq = &vq->cq;
 		cvq->vq = vq;
@@ -444,7 +449,8 @@ int virtio_dev_queue_setup(struct rte_eth_dev *dev,
 		cvq->virtio_net_hdr_mz = hdr_mz;
 		cvq->virtio_net_hdr_mem = hdr_mz->phys_addr;
 		memset(cvq->virtio_net_hdr_mz->addr, 0, PAGE_SIZE);
-		*pvq = cvq;
+
+		hw->cvq = cvq;
 	}
 
 	/* For virtio_user case (that is when dev->pci_dev is NULL), we use
@@ -485,11 +491,9 @@ int virtio_dev_queue_setup(struct rte_eth_dev *dev,
 
 	if (hw->vtpci_ops->setup_queue(hw, vq) < 0) {
 		PMD_INIT_LOG(ERR, "setup_queue failed");
-		virtio_dev_queue_release(vq);
 		return -EINVAL;
 	}
 
-	vq->configured = 1;
 	return 0;
 
 fail_q_alloc:
@@ -501,40 +505,60 @@ fail_q_alloc:
 	return ret;
 }
 
-static int
-virtio_dev_cq_queue_setup(struct rte_eth_dev *dev, uint16_t vtpci_queue_idx,
-		uint32_t socket_id)
+static void
+virtio_free_queues(struct virtio_hw *hw)
 {
-	struct virtnet_ctl *cvq;
-	int ret;
-	struct virtio_hw *hw = dev->data->dev_private;
+	uint16_t nr_vq = virtio_get_nr_vq(hw);
+	struct virtqueue *vq;
+	int queue_type;
+	uint16_t i;
 
-	PMD_INIT_FUNC_TRACE();
-	ret = virtio_dev_queue_setup(dev, VTNET_CQ, VTNET_SQ_CQ_QUEUE_IDX,
-			vtpci_queue_idx, 0, socket_id, (void **)&cvq);
-	if (ret < 0) {
-		PMD_INIT_LOG(ERR, "control vq initialization failed");
-		return ret;
+	for (i = 0; i < nr_vq; i++) {
+		vq = hw->vqs[i];
+		if (!vq)
+			continue;
+
+		queue_type = virtio_get_queue_type(hw, i);
+		if (queue_type == VTNET_RQ) {
+			rte_free(vq->sw_ring);
+			rte_memzone_free(vq->rxq.mz);
+		} else if (queue_type == VTNET_TQ) {
+			rte_memzone_free(vq->txq.mz);
+			rte_memzone_free(vq->txq.virtio_net_hdr_mz);
+		} else {
+			rte_memzone_free(vq->cq.mz);
+			rte_memzone_free(vq->cq.virtio_net_hdr_mz);
+		}
+
+		rte_free(vq);
 	}
 
-	hw->cvq = cvq;
-	return 0;
+	rte_free(hw->vqs);
 }
 
-static void
-virtio_free_queues(struct rte_eth_dev *dev)
+static int
+virtio_alloc_queues(struct rte_eth_dev *dev)
 {
-	unsigned int i;
-
-	for (i = 0; i < dev->data->nb_rx_queues; i++)
-		virtio_dev_rx_queue_release(dev->data->rx_queues[i]);
+	struct virtio_hw *hw = dev->data->dev_private;
+	uint16_t nr_vq = virtio_get_nr_vq(hw);
+	uint16_t i;
+	int ret;
 
-	dev->data->nb_rx_queues = 0;
+	hw->vqs = rte_zmalloc(NULL, sizeof(struct virtqueue *) * nr_vq, 0);
+	if (!hw->vqs) {
+		PMD_INIT_LOG(ERR, "failed to allocate vqs");
+		return -ENOMEM;
+	}
 
-	for (i = 0; i < dev->data->nb_tx_queues; i++)
-		virtio_dev_tx_queue_release(dev->data->tx_queues[i]);
+	for (i = 0; i < nr_vq; i++) {
+		ret = virtio_init_queue(dev, i);
+		if (ret < 0) {
+			virtio_free_queues(hw);
+			return ret;
+		}
+	}
 
-	dev->data->nb_tx_queues = 0;
+	return 0;
 }
 
 static void
@@ -544,16 +568,13 @@ virtio_dev_close(struct rte_eth_dev *dev)
 
 	PMD_INIT_LOG(DEBUG, "virtio_dev_close");
 
-	if (hw->cvq)
-		virtio_dev_queue_release(hw->cvq->vq);
-
 	/* reset the NIC */
 	if (dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC)
 		vtpci_irq_config(hw, VIRTIO_MSI_NO_VECTOR);
 	vtpci_reset(hw);
 	hw->started = 0;
 	virtio_dev_free_mbufs(dev);
-	virtio_free_queues(dev);
+	virtio_free_queues(hw);
 }
 
 static void
@@ -686,9 +707,9 @@ static const struct eth_dev_ops virtio_eth_dev_ops = {
 	.xstats_reset            = virtio_dev_stats_reset,
 	.link_update             = virtio_dev_link_update,
 	.rx_queue_setup          = virtio_dev_rx_queue_setup,
-	.rx_queue_release        = virtio_dev_rx_queue_release,
+	.rx_queue_release        = virtio_dev_queue_release,
 	.tx_queue_setup          = virtio_dev_tx_queue_setup,
-	.tx_queue_release        = virtio_dev_tx_queue_release,
+	.tx_queue_release        = virtio_dev_queue_release,
 	/* collect stats per queue */
 	.queue_stats_mapping_set = virtio_dev_queue_stats_mapping_set,
 	.vlan_filter_set         = virtio_vlan_filter_set,
@@ -1141,6 +1162,7 @@ virtio_init_device(struct rte_eth_dev *eth_dev, uint64_t req_features)
 	struct virtio_net_config *config;
 	struct virtio_net_config local_config;
 	struct rte_pci_device *pci_dev = eth_dev->pci_dev;
+	int ret;
 
 	/* Reset the device although not necessary at startup */
 	vtpci_reset(hw);
@@ -1222,6 +1244,10 @@ virtio_init_device(struct rte_eth_dev *eth_dev, uint64_t req_features)
 		hw->max_queue_pairs = 1;
 	}
 
+	ret = virtio_alloc_queues(eth_dev);
+	if (ret < 0)
+		return ret;
+
 	if (pci_dev)
 		PMD_INIT_LOG(DEBUG, "port %d vendorID=0x%x deviceID=0x%x",
 			eth_dev->data->port_id, pci_dev->id.vendor_id,
@@ -1390,15 +1416,9 @@ virtio_dev_configure(struct rte_eth_dev *dev)
 		return -ENOTSUP;
 	}
 
-	/* Setup and start control queue */
-	if (vtpci_with_feature(hw, VIRTIO_NET_F_CTRL_VQ)) {
-		ret = virtio_dev_cq_queue_setup(dev,
-			hw->max_queue_pairs * 2,
-			SOCKET_ID_ANY);
-		if (ret < 0)
-			return ret;
+	/* start control queue */
+	if (vtpci_with_feature(hw, VIRTIO_NET_F_CTRL_VQ))
 		virtio_dev_cq_start(dev);
-	}
 
 	hw->vlan_strip = rxmode->hw_vlan_strip;
 
diff --git a/drivers/net/virtio/virtio_ethdev.h b/drivers/net/virtio/virtio_ethdev.h
index de33b32..8a3fa6d 100644
--- a/drivers/net/virtio/virtio_ethdev.h
+++ b/drivers/net/virtio/virtio_ethdev.h
@@ -80,29 +80,15 @@ void virtio_dev_cq_start(struct rte_eth_dev *dev);
  */
 void virtio_dev_rxtx_start(struct rte_eth_dev *dev);
 
-int virtio_dev_queue_setup(struct rte_eth_dev *dev,
-			int queue_type,
-			uint16_t queue_idx,
-			uint16_t vtpci_queue_idx,
-			uint16_t nb_desc,
-			unsigned int socket_id,
-			void **pvq);
-
-void virtio_dev_queue_release(struct virtqueue *vq);
-
 int  virtio_dev_rx_queue_setup(struct rte_eth_dev *dev, uint16_t rx_queue_id,
 		uint16_t nb_rx_desc, unsigned int socket_id,
 		const struct rte_eth_rxconf *rx_conf,
 		struct rte_mempool *mb_pool);
 
-void virtio_dev_rx_queue_release(void *rxq);
-
 int  virtio_dev_tx_queue_setup(struct rte_eth_dev *dev, uint16_t tx_queue_id,
 		uint16_t nb_tx_desc, unsigned int socket_id,
 		const struct rte_eth_txconf *tx_conf);
 
-void virtio_dev_tx_queue_release(void *txq);
-
 uint16_t virtio_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 		uint16_t nb_pkts);
 
diff --git a/drivers/net/virtio/virtio_pci.h b/drivers/net/virtio/virtio_pci.h
index 0c5ed31..f63f76c 100644
--- a/drivers/net/virtio/virtio_pci.h
+++ b/drivers/net/virtio/virtio_pci.h
@@ -264,6 +264,8 @@ struct virtio_hw {
 	struct virtio_net_config *dev_cfg;
 	const struct virtio_pci_ops *vtpci_ops;
 	void	    *virtio_user_dev;
+
+	struct virtqueue **vqs;
 };
 
 /*
diff --git a/drivers/net/virtio/virtio_rxtx.c b/drivers/net/virtio/virtio_rxtx.c
index b4c4aa4..6e7ff27 100644
--- a/drivers/net/virtio/virtio_rxtx.c
+++ b/drivers/net/virtio/virtio_rxtx.c
@@ -530,24 +530,24 @@ int
 virtio_dev_rx_queue_setup(struct rte_eth_dev *dev,
 			uint16_t queue_idx,
 			uint16_t nb_desc,
-			unsigned int socket_id,
+			unsigned int socket_id __rte_unused,
 			__rte_unused const struct rte_eth_rxconf *rx_conf,
 			struct rte_mempool *mp)
 {
 	uint16_t vtpci_queue_idx = 2 * queue_idx + VTNET_SQ_RQ_QUEUE_IDX;
+	struct virtio_hw *hw = dev->data->dev_private;
+	struct virtqueue *vq = hw->vqs[vtpci_queue_idx];
 	struct virtnet_rx *rxvq;
-	int ret;
 
 	PMD_INIT_FUNC_TRACE();
-	ret = virtio_dev_queue_setup(dev, VTNET_RQ, queue_idx, vtpci_queue_idx,
-			nb_desc, socket_id, (void **)&rxvq);
-	if (ret < 0) {
-		PMD_INIT_LOG(ERR, "rvq initialization failed");
-		return ret;
-	}
 
-	/* Create mempool for rx mbuf allocation */
+	if (nb_desc == 0 || nb_desc > vq->vq_nentries)
+		nb_desc = vq->vq_nentries;
+	vq->vq_free_cnt = RTE_MIN(vq->vq_free_cnt, nb_desc);
+
+	rxvq = &vq->rxq;
 	rxvq->mpool = mp;
+	rxvq->queue_id = queue_idx;
 
 	dev->data->rx_queues[queue_idx] = rxvq;
 
@@ -556,27 +556,6 @@ virtio_dev_rx_queue_setup(struct rte_eth_dev *dev,
 	return 0;
 }
 
-void
-virtio_dev_rx_queue_release(void *rxq)
-{
-	struct virtnet_rx *rxvq = rxq;
-	struct virtqueue *vq;
-	const struct rte_memzone *mz;
-
-	if (rxvq == NULL)
-		return;
-
-	/*
-	 * rxvq is freed when vq is freed, and as mz should be freed after the
-	 * del_queue, so we reserve the mz pointer first.
-	 */
-	vq = rxvq->vq;
-	mz = rxvq->mz;
-
-	virtio_dev_queue_release(vq);
-	rte_memzone_free(mz);
-}
-
 static void
 virtio_update_rxtx_handler(struct rte_eth_dev *dev,
 			   const struct rte_eth_txconf *tx_conf)
@@ -613,27 +592,25 @@ int
 virtio_dev_tx_queue_setup(struct rte_eth_dev *dev,
 			uint16_t queue_idx,
 			uint16_t nb_desc,
-			unsigned int socket_id,
+			unsigned int socket_id __rte_unused,
 			const struct rte_eth_txconf *tx_conf)
 {
 	uint8_t vtpci_queue_idx = 2 * queue_idx + VTNET_SQ_TQ_QUEUE_IDX;
+	struct virtio_hw *hw = dev->data->dev_private;
+	struct virtqueue *vq = hw->vqs[vtpci_queue_idx];
 	struct virtnet_tx *txvq;
-	struct virtqueue *vq;
 	uint16_t tx_free_thresh;
-	int ret;
 
 	PMD_INIT_FUNC_TRACE();
 
-
 	virtio_update_rxtx_handler(dev, tx_conf);
 
-	ret = virtio_dev_queue_setup(dev, VTNET_TQ, queue_idx, vtpci_queue_idx,
-			nb_desc, socket_id, (void **)&txvq);
-	if (ret < 0) {
-		PMD_INIT_LOG(ERR, "tvq initialization failed");
-		return ret;
-	}
-	vq = txvq->vq;
+	if (nb_desc == 0 || nb_desc > vq->vq_nentries)
+		nb_desc = vq->vq_nentries;
+	vq->vq_free_cnt = RTE_MIN(vq->vq_free_cnt, nb_desc);
+
+	txvq = &vq->txq;
+	txvq->queue_id = queue_idx;
 
 	tx_free_thresh = tx_conf->tx_free_thresh;
 	if (tx_free_thresh == 0)
@@ -655,30 +632,6 @@ virtio_dev_tx_queue_setup(struct rte_eth_dev *dev,
 	return 0;
 }
 
-void
-virtio_dev_tx_queue_release(void *txq)
-{
-	struct virtnet_tx *txvq = txq;
-	struct virtqueue *vq;
-	const struct rte_memzone *mz;
-	const struct rte_memzone *hdr_mz;
-
-	if (txvq == NULL)
-		return;
-
-	/*
-	 * txvq is freed when vq is freed, and as mz should be freed after the
-	 * del_queue, so we reserve the mz pointer first.
-	 */
-	vq = txvq->vq;
-	mz = txvq->mz;
-	hdr_mz = txvq->virtio_net_hdr_mz;
-
-	virtio_dev_queue_release(vq);
-	rte_memzone_free(mz);
-	rte_memzone_free(hdr_mz);
-}
-
 static void
 virtio_discard_rxbuf(struct virtqueue *vq, struct rte_mbuf *m)
 {
diff --git a/drivers/net/virtio/virtqueue.h b/drivers/net/virtio/virtqueue.h
index bbeb2f2..f0bb089 100644
--- a/drivers/net/virtio/virtqueue.h
+++ b/drivers/net/virtio/virtqueue.h
@@ -211,7 +211,6 @@ struct virtqueue {
 	uint16_t  vq_queue_index;   /**< PCI queue index */
 	uint16_t offset; /**< relative offset to obtain addr in mbuf */
 	uint16_t  *notify_addr;
-	int configured;
 	struct rte_mbuf **sw_ring;  /**< RX software ring. */
 	struct vq_desc_extra vq_descx[0];
 };
-- 
1.9.0

^ permalink raw reply	[flat|nested] 43+ messages in thread

* [dpdk-dev] [PATCH v2 05/10] net/virtio: initiate vring at init stage
  2016-11-05  9:40 ` [dpdk-dev] [PATCH v2 00/10] net/virtio: fix queue reconfigure issue Yuanhan Liu
                     ` (3 preceding siblings ...)
  2016-11-05  9:40   ` [dpdk-dev] [PATCH v2 04/10] net/virtio: allocate queue at init stage Yuanhan Liu
@ 2016-11-05  9:41   ` Yuanhan Liu
  2016-11-05  9:41   ` [dpdk-dev] [PATCH v2 06/10] net/virtio: move queue configure code to proper place Yuanhan Liu
                     ` (6 subsequent siblings)
  11 siblings, 0 replies; 43+ messages in thread
From: Yuanhan Liu @ 2016-11-05  9:41 UTC (permalink / raw)
  To: dev
  Cc: Thomas Monjalon, Tan Jianfeng, Kevin Traynor, Ilya Maximets,
	Kyle Larose, Maxime Coquelin, Yuanhan Liu

virtio_dev_vring_start() actually does the vring initiation job, and the
vring initiation job should be done at the dev init stage, as stated in
great detail in the former commit.

So move it there, and rename it to virtio_init_vring().

Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
---
 drivers/net/virtio/virtio_ethdev.c | 32 +++++++++++++++++++++++++++++++-
 drivers/net/virtio/virtio_rxtx.c   | 32 --------------------------------
 2 files changed, 31 insertions(+), 33 deletions(-)

diff --git a/drivers/net/virtio/virtio_ethdev.c b/drivers/net/virtio/virtio_ethdev.c
index 67a175d..82dcc97 100644
--- a/drivers/net/virtio/virtio_ethdev.c
+++ b/drivers/net/virtio/virtio_ethdev.c
@@ -308,6 +308,35 @@ virtio_get_nr_vq(struct virtio_hw *hw)
 	return nr_vq;
 }
 
+static void
+virtio_init_vring(struct virtqueue *vq)
+{
+	int size = vq->vq_nentries;
+	struct vring *vr = &vq->vq_ring;
+	uint8_t *ring_mem = vq->vq_ring_virt_mem;
+
+	PMD_INIT_FUNC_TRACE();
+
+	/*
+	 * Reinitialise since virtio port might have been stopped and restarted
+	 */
+	memset(ring_mem, 0, vq->vq_ring_size);
+	vring_init(vr, size, ring_mem, VIRTIO_PCI_VRING_ALIGN);
+	vq->vq_used_cons_idx = 0;
+	vq->vq_desc_head_idx = 0;
+	vq->vq_avail_idx = 0;
+	vq->vq_desc_tail_idx = (uint16_t)(vq->vq_nentries - 1);
+	vq->vq_free_cnt = vq->vq_nentries;
+	memset(vq->vq_descx, 0, sizeof(struct vq_desc_extra) * vq->vq_nentries);
+
+	vring_desc_init(vr->desc, size);
+
+	/*
+	 * Disable device(host) interrupting guest
+	 */
+	virtqueue_disable_intr(vq);
+}
+
 static int
 virtio_init_queue(struct rte_eth_dev *dev, uint16_t vtpci_queue_idx)
 {
@@ -371,7 +400,6 @@ virtio_init_queue(struct rte_eth_dev *dev, uint16_t vtpci_queue_idx)
 	vq->hw = hw;
 	vq->vq_queue_index = vtpci_queue_idx;
 	vq->vq_nentries = vq_size;
-	vq->vq_free_cnt = vq_size;
 
 	/*
 	 * Reserve a memzone for vring elements
@@ -402,6 +430,8 @@ virtio_init_queue(struct rte_eth_dev *dev, uint16_t vtpci_queue_idx)
 	PMD_INIT_LOG(DEBUG, "vq->vq_ring_virt_mem: 0x%" PRIx64,
 		     (uint64_t)(uintptr_t)mz->addr);
 
+	virtio_init_vring(vq);
+
 	if (sz_hdr_mz) {
 		snprintf(vq_hdr_name, sizeof(vq_hdr_name), "port%d_vq%d_hdr",
 			 dev->data->port_id, vtpci_queue_idx);
diff --git a/drivers/net/virtio/virtio_rxtx.c b/drivers/net/virtio/virtio_rxtx.c
index 6e7ff27..24129d6 100644
--- a/drivers/net/virtio/virtio_rxtx.c
+++ b/drivers/net/virtio/virtio_rxtx.c
@@ -378,42 +378,12 @@ virtqueue_enqueue_xmit(struct virtnet_tx *txvq, struct rte_mbuf *cookie,
 	vq_update_avail_ring(vq, head_idx);
 }
 
-static void
-virtio_dev_vring_start(struct virtqueue *vq)
-{
-	int size = vq->vq_nentries;
-	struct vring *vr = &vq->vq_ring;
-	uint8_t *ring_mem = vq->vq_ring_virt_mem;
-
-	PMD_INIT_FUNC_TRACE();
-
-	/*
-	 * Reinitialise since virtio port might have been stopped and restarted
-	 */
-	memset(vq->vq_ring_virt_mem, 0, vq->vq_ring_size);
-	vring_init(vr, size, ring_mem, VIRTIO_PCI_VRING_ALIGN);
-	vq->vq_used_cons_idx = 0;
-	vq->vq_desc_head_idx = 0;
-	vq->vq_avail_idx = 0;
-	vq->vq_desc_tail_idx = (uint16_t)(vq->vq_nentries - 1);
-	vq->vq_free_cnt = vq->vq_nentries;
-	memset(vq->vq_descx, 0, sizeof(struct vq_desc_extra) * vq->vq_nentries);
-
-	vring_desc_init(vr->desc, size);
-
-	/*
-	 * Disable device(host) interrupting guest
-	 */
-	virtqueue_disable_intr(vq);
-}
-
 void
 virtio_dev_cq_start(struct rte_eth_dev *dev)
 {
 	struct virtio_hw *hw = dev->data->dev_private;
 
 	if (hw->cvq && hw->cvq->vq) {
-		virtio_dev_vring_start(hw->cvq->vq);
 		VIRTQUEUE_DUMP((struct virtqueue *)hw->cvq->vq);
 	}
 }
@@ -441,7 +411,6 @@ virtio_dev_rxtx_start(struct rte_eth_dev *dev)
 		int error, nbufs;
 		struct rte_mbuf *m;
 
-		virtio_dev_vring_start(vq);
 		if (rxvq->mpool == NULL) {
 			rte_exit(EXIT_FAILURE,
 				"Cannot allocate mbufs for rx virtqueue");
@@ -499,7 +468,6 @@ virtio_dev_rxtx_start(struct rte_eth_dev *dev)
 		struct virtnet_tx *txvq = dev->data->tx_queues[i];
 		struct virtqueue *vq = txvq->vq;
 
-		virtio_dev_vring_start(vq);
 		if (hw->use_simple_rxtx) {
 			uint16_t mid_idx  = vq->vq_nentries >> 1;
 
-- 
1.9.0

^ permalink raw reply	[flat|nested] 43+ messages in thread

* [dpdk-dev] [PATCH v2 06/10] net/virtio: move queue configure code to proper place
  2016-11-05  9:40 ` [dpdk-dev] [PATCH v2 00/10] net/virtio: fix queue reconfigure issue Yuanhan Liu
                     ` (4 preceding siblings ...)
  2016-11-05  9:41   ` [dpdk-dev] [PATCH v2 05/10] net/virtio: initiate vring " Yuanhan Liu
@ 2016-11-05  9:41   ` Yuanhan Liu
  2016-11-05  9:41   ` [dpdk-dev] [PATCH v2 07/10] net/virtio: complete init stage at the right place Yuanhan Liu
                     ` (5 subsequent siblings)
  11 siblings, 0 replies; 43+ messages in thread
From: Yuanhan Liu @ 2016-11-05  9:41 UTC (permalink / raw)
  To: dev
  Cc: Thomas Monjalon, Tan Jianfeng, Kevin Traynor, Ilya Maximets,
	Kyle Larose, Maxime Coquelin, Yuanhan Liu

The only piece of code in virtio_dev_rxtx_start() actually does queue
configure/setup work. So, move it to the corresponding queue_setup
callbacks.

Once that is done, virtio_dev_rxtx_start() becomes an empty function,
thus it is removed.

Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
---
 drivers/net/virtio/virtio_ethdev.c |   2 -
 drivers/net/virtio/virtio_ethdev.h |   2 -
 drivers/net/virtio/virtio_rxtx.c   | 186 ++++++++++++++++---------------------
 3 files changed, 78 insertions(+), 112 deletions(-)

diff --git a/drivers/net/virtio/virtio_ethdev.c b/drivers/net/virtio/virtio_ethdev.c
index 82dcc97..a3e2aa9 100644
--- a/drivers/net/virtio/virtio_ethdev.c
+++ b/drivers/net/virtio/virtio_ethdev.c
@@ -1497,8 +1497,6 @@ virtio_dev_start(struct rte_eth_dev *dev)
 	if (hw->started)
 		return 0;
 
-	/* Do final configuration before rx/tx engine starts */
-	virtio_dev_rxtx_start(dev);
 	vtpci_reinit_complete(hw);
 
 	hw->started = 1;
diff --git a/drivers/net/virtio/virtio_ethdev.h b/drivers/net/virtio/virtio_ethdev.h
index 8a3fa6d..27d9a19 100644
--- a/drivers/net/virtio/virtio_ethdev.h
+++ b/drivers/net/virtio/virtio_ethdev.h
@@ -78,8 +78,6 @@ void virtio_dev_cq_start(struct rte_eth_dev *dev);
 /*
  * RX/TX function prototypes
  */
-void virtio_dev_rxtx_start(struct rte_eth_dev *dev);
-
 int  virtio_dev_rx_queue_setup(struct rte_eth_dev *dev, uint16_t rx_queue_id,
 		uint16_t nb_rx_desc, unsigned int socket_id,
 		const struct rte_eth_rxconf *rx_conf,
diff --git a/drivers/net/virtio/virtio_rxtx.c b/drivers/net/virtio/virtio_rxtx.c
index 24129d6..22d97a4 100644
--- a/drivers/net/virtio/virtio_rxtx.c
+++ b/drivers/net/virtio/virtio_rxtx.c
@@ -388,112 +388,6 @@ virtio_dev_cq_start(struct rte_eth_dev *dev)
 	}
 }
 
-void
-virtio_dev_rxtx_start(struct rte_eth_dev *dev)
-{
-	/*
-	 * Start receive and transmit vrings
-	 * -	Setup vring structure for all queues
-	 * -	Initialize descriptor for the rx vring
-	 * -	Allocate blank mbufs for the each rx descriptor
-	 *
-	 */
-	uint16_t i;
-	uint16_t desc_idx;
-	struct virtio_hw *hw = dev->data->dev_private;
-
-	PMD_INIT_FUNC_TRACE();
-
-	/* Start rx vring. */
-	for (i = 0; i < dev->data->nb_rx_queues; i++) {
-		struct virtnet_rx *rxvq = dev->data->rx_queues[i];
-		struct virtqueue *vq = rxvq->vq;
-		int error, nbufs;
-		struct rte_mbuf *m;
-
-		if (rxvq->mpool == NULL) {
-			rte_exit(EXIT_FAILURE,
-				"Cannot allocate mbufs for rx virtqueue");
-		}
-
-		/* Allocate blank mbufs for the each rx descriptor */
-		nbufs = 0;
-		error = ENOSPC;
-
-		if (hw->use_simple_rxtx) {
-			for (desc_idx = 0; desc_idx < vq->vq_nentries;
-			     desc_idx++) {
-				vq->vq_ring.avail->ring[desc_idx] = desc_idx;
-				vq->vq_ring.desc[desc_idx].flags =
-					VRING_DESC_F_WRITE;
-			}
-		}
-
-		memset(&rxvq->fake_mbuf, 0, sizeof(rxvq->fake_mbuf));
-		for (desc_idx = 0; desc_idx < RTE_PMD_VIRTIO_RX_MAX_BURST;
-		     desc_idx++) {
-			vq->sw_ring[vq->vq_nentries + desc_idx] =
-				&rxvq->fake_mbuf;
-		}
-
-		while (!virtqueue_full(vq)) {
-			m = rte_mbuf_raw_alloc(rxvq->mpool);
-			if (m == NULL)
-				break;
-
-			/******************************************
-			*         Enqueue allocated buffers        *
-			*******************************************/
-			if (hw->use_simple_rxtx)
-				error = virtqueue_enqueue_recv_refill_simple(vq, m);
-			else
-				error = virtqueue_enqueue_recv_refill(vq, m);
-
-			if (error) {
-				rte_pktmbuf_free(m);
-				break;
-			}
-			nbufs++;
-		}
-
-		vq_update_avail_idx(vq);
-
-		PMD_INIT_LOG(DEBUG, "Allocated %d bufs", nbufs);
-
-		VIRTQUEUE_DUMP(vq);
-	}
-
-	/* Start tx vring. */
-	for (i = 0; i < dev->data->nb_tx_queues; i++) {
-		struct virtnet_tx *txvq = dev->data->tx_queues[i];
-		struct virtqueue *vq = txvq->vq;
-
-		if (hw->use_simple_rxtx) {
-			uint16_t mid_idx  = vq->vq_nentries >> 1;
-
-			for (desc_idx = 0; desc_idx < mid_idx; desc_idx++) {
-				vq->vq_ring.avail->ring[desc_idx] =
-					desc_idx + mid_idx;
-				vq->vq_ring.desc[desc_idx + mid_idx].next =
-					desc_idx;
-				vq->vq_ring.desc[desc_idx + mid_idx].addr =
-					txvq->virtio_net_hdr_mem +
-					offsetof(struct virtio_tx_region, tx_hdr);
-				vq->vq_ring.desc[desc_idx + mid_idx].len =
-					vq->hw->vtnet_hdr_size;
-				vq->vq_ring.desc[desc_idx + mid_idx].flags =
-					VRING_DESC_F_NEXT;
-				vq->vq_ring.desc[desc_idx].flags = 0;
-			}
-			for (desc_idx = mid_idx; desc_idx < vq->vq_nentries;
-			     desc_idx++)
-				vq->vq_ring.avail->ring[desc_idx] = desc_idx;
-		}
-
-		VIRTQUEUE_DUMP(vq);
-	}
-}
-
 int
 virtio_dev_rx_queue_setup(struct rte_eth_dev *dev,
 			uint16_t queue_idx,
@@ -506,6 +400,9 @@ virtio_dev_rx_queue_setup(struct rte_eth_dev *dev,
 	struct virtio_hw *hw = dev->data->dev_private;
 	struct virtqueue *vq = hw->vqs[vtpci_queue_idx];
 	struct virtnet_rx *rxvq;
+	int error, nbufs;
+	struct rte_mbuf *m;
+	uint16_t desc_idx;
 
 	PMD_INIT_FUNC_TRACE();
 
@@ -514,13 +411,61 @@ virtio_dev_rx_queue_setup(struct rte_eth_dev *dev,
 	vq->vq_free_cnt = RTE_MIN(vq->vq_free_cnt, nb_desc);
 
 	rxvq = &vq->rxq;
-	rxvq->mpool = mp;
 	rxvq->queue_id = queue_idx;
-
+	rxvq->mpool = mp;
+	if (rxvq->mpool == NULL) {
+		rte_exit(EXIT_FAILURE,
+			"Cannot allocate mbufs for rx virtqueue");
+	}
 	dev->data->rx_queues[queue_idx] = rxvq;
 
+
+	/* Allocate blank mbufs for the each rx descriptor */
+	nbufs = 0;
+	error = ENOSPC;
+
+	if (hw->use_simple_rxtx) {
+		for (desc_idx = 0; desc_idx < vq->vq_nentries;
+		     desc_idx++) {
+			vq->vq_ring.avail->ring[desc_idx] = desc_idx;
+			vq->vq_ring.desc[desc_idx].flags =
+				VRING_DESC_F_WRITE;
+		}
+	}
+
+	memset(&rxvq->fake_mbuf, 0, sizeof(rxvq->fake_mbuf));
+	for (desc_idx = 0; desc_idx < RTE_PMD_VIRTIO_RX_MAX_BURST;
+	     desc_idx++) {
+		vq->sw_ring[vq->vq_nentries + desc_idx] =
+			&rxvq->fake_mbuf;
+	}
+
+	while (!virtqueue_full(vq)) {
+		m = rte_mbuf_raw_alloc(rxvq->mpool);
+		if (m == NULL)
+			break;
+
+		/* Enqueue allocated buffers */
+		if (hw->use_simple_rxtx)
+			error = virtqueue_enqueue_recv_refill_simple(vq, m);
+		else
+			error = virtqueue_enqueue_recv_refill(vq, m);
+
+		if (error) {
+			rte_pktmbuf_free(m);
+			break;
+		}
+		nbufs++;
+	}
+
+	vq_update_avail_idx(vq);
+
+	PMD_INIT_LOG(DEBUG, "Allocated %d bufs", nbufs);
+
 	virtio_rxq_vec_setup(rxvq);
 
+	VIRTQUEUE_DUMP(vq);
+
 	return 0;
 }
 
@@ -568,6 +513,7 @@ virtio_dev_tx_queue_setup(struct rte_eth_dev *dev,
 	struct virtqueue *vq = hw->vqs[vtpci_queue_idx];
 	struct virtnet_tx *txvq;
 	uint16_t tx_free_thresh;
+	uint16_t desc_idx;
 
 	PMD_INIT_FUNC_TRACE();
 
@@ -596,6 +542,30 @@ virtio_dev_tx_queue_setup(struct rte_eth_dev *dev,
 
 	vq->vq_free_thresh = tx_free_thresh;
 
+	if (hw->use_simple_rxtx) {
+		uint16_t mid_idx  = vq->vq_nentries >> 1;
+
+		for (desc_idx = 0; desc_idx < mid_idx; desc_idx++) {
+			vq->vq_ring.avail->ring[desc_idx] =
+				desc_idx + mid_idx;
+			vq->vq_ring.desc[desc_idx + mid_idx].next =
+				desc_idx;
+			vq->vq_ring.desc[desc_idx + mid_idx].addr =
+				txvq->virtio_net_hdr_mem +
+				offsetof(struct virtio_tx_region, tx_hdr);
+			vq->vq_ring.desc[desc_idx + mid_idx].len =
+				vq->hw->vtnet_hdr_size;
+			vq->vq_ring.desc[desc_idx + mid_idx].flags =
+				VRING_DESC_F_NEXT;
+			vq->vq_ring.desc[desc_idx].flags = 0;
+		}
+		for (desc_idx = mid_idx; desc_idx < vq->vq_nentries;
+		     desc_idx++)
+			vq->vq_ring.avail->ring[desc_idx] = desc_idx;
+	}
+
+	VIRTQUEUE_DUMP(vq);
+
 	dev->data->tx_queues[queue_idx] = txvq;
 	return 0;
 }
-- 
1.9.0

^ permalink raw reply	[flat|nested] 43+ messages in thread

* [dpdk-dev] [PATCH v2 07/10] net/virtio: complete init stage at the right place
  2016-11-05  9:40 ` [dpdk-dev] [PATCH v2 00/10] net/virtio: fix queue reconfigure issue Yuanhan Liu
                     ` (5 preceding siblings ...)
  2016-11-05  9:41   ` [dpdk-dev] [PATCH v2 06/10] net/virtio: move queue configure code to proper place Yuanhan Liu
@ 2016-11-05  9:41   ` Yuanhan Liu
  2016-11-05  9:41   ` [dpdk-dev] [PATCH v2 08/10] net/virtio: remove started field Yuanhan Liu
                     ` (4 subsequent siblings)
  11 siblings, 0 replies; 43+ messages in thread
From: Yuanhan Liu @ 2016-11-05  9:41 UTC (permalink / raw)
  To: dev
  Cc: Thomas Monjalon, Tan Jianfeng, Kevin Traynor, Ilya Maximets,
	Kyle Larose, Maxime Coquelin, Yuanhan Liu

Invoking vtpci_reinit_complete() at the port start stage doesn't make any
sense; instead, it should be done at the end of the dev init stage.

So move it there.

Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
---
 drivers/net/virtio/virtio_ethdev.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/drivers/net/virtio/virtio_ethdev.c b/drivers/net/virtio/virtio_ethdev.c
index a3e2aa9..15c48b9 100644
--- a/drivers/net/virtio/virtio_ethdev.c
+++ b/drivers/net/virtio/virtio_ethdev.c
@@ -1277,6 +1277,7 @@ virtio_init_device(struct rte_eth_dev *eth_dev, uint64_t req_features)
 	ret = virtio_alloc_queues(eth_dev);
 	if (ret < 0)
 		return ret;
+	vtpci_reinit_complete(hw);
 
 	if (pci_dev)
 		PMD_INIT_LOG(DEBUG, "port %d vendorID=0x%x deviceID=0x%x",
@@ -1497,8 +1498,6 @@ virtio_dev_start(struct rte_eth_dev *dev)
 	if (hw->started)
 		return 0;
 
-	vtpci_reinit_complete(hw);
-
 	hw->started = 1;
 
 	/*Notify the backend
-- 
1.9.0

^ permalink raw reply	[flat|nested] 43+ messages in thread

* [dpdk-dev] [PATCH v2 08/10] net/virtio: remove started field
  2016-11-05  9:40 ` [dpdk-dev] [PATCH v2 00/10] net/virtio: fix queue reconfigure issue Yuanhan Liu
                     ` (6 preceding siblings ...)
  2016-11-05  9:41   ` [dpdk-dev] [PATCH v2 07/10] net/virtio: complete init stage at the right place Yuanhan Liu
@ 2016-11-05  9:41   ` Yuanhan Liu
  2016-11-05  9:41   ` [dpdk-dev] [PATCH v2 09/10] net/virtio: fix less queues being enabled issue Yuanhan Liu
                     ` (3 subsequent siblings)
  11 siblings, 0 replies; 43+ messages in thread
From: Yuanhan Liu @ 2016-11-05  9:41 UTC (permalink / raw)
  To: dev
  Cc: Thomas Monjalon, Tan Jianfeng, Kevin Traynor, Ilya Maximets,
	Kyle Larose, Maxime Coquelin, Yuanhan Liu

The "hw->started" field was introduced to avoid touching queues on
restart. We no longer touch queues on restart, thus it's safe to
remove this flag.

Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
---
 drivers/net/virtio/virtio_ethdev.c | 15 ++-------------
 drivers/net/virtio/virtio_pci.h    |  1 -
 2 files changed, 2 insertions(+), 14 deletions(-)

diff --git a/drivers/net/virtio/virtio_ethdev.c b/drivers/net/virtio/virtio_ethdev.c
index 15c48b9..5ec3e0e 100644
--- a/drivers/net/virtio/virtio_ethdev.c
+++ b/drivers/net/virtio/virtio_ethdev.c
@@ -602,7 +602,6 @@ virtio_dev_close(struct rte_eth_dev *dev)
 	if (dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC)
 		vtpci_irq_config(hw, VIRTIO_MSI_NO_VECTOR);
 	vtpci_reset(hw);
-	hw->started = 0;
 	virtio_dev_free_mbufs(dev);
 	virtio_free_queues(hw);
 }
@@ -1345,17 +1344,14 @@ static int
 eth_virtio_dev_uninit(struct rte_eth_dev *eth_dev)
 {
 	struct rte_pci_device *pci_dev;
-	struct virtio_hw *hw = eth_dev->data->dev_private;
 
 	PMD_INIT_FUNC_TRACE();
 
 	if (rte_eal_process_type() == RTE_PROC_SECONDARY)
 		return -EPERM;
 
-	if (hw->started == 1) {
-		virtio_dev_stop(eth_dev);
-		virtio_dev_close(eth_dev);
-	}
+	virtio_dev_stop(eth_dev);
+	virtio_dev_close(eth_dev);
 	pci_dev = eth_dev->pci_dev;
 
 	eth_dev->dev_ops = NULL;
@@ -1474,7 +1470,6 @@ static int
 virtio_dev_start(struct rte_eth_dev *dev)
 {
 	uint16_t nb_queues, i;
-	struct virtio_hw *hw = dev->data->dev_private;
 	struct virtnet_rx *rxvq;
 	struct virtnet_tx *txvq __rte_unused;
 
@@ -1494,12 +1489,6 @@ virtio_dev_start(struct rte_eth_dev *dev)
 	/* Initialize Link state */
 	virtio_dev_link_update(dev, 0);
 
-	/* On restart after stop do not touch queues */
-	if (hw->started)
-		return 0;
-
-	hw->started = 1;
-
 	/*Notify the backend
 	 *Otherwise the tap backend might already stop its queue due to fullness.
 	 *vhost backend will have no chance to be waked up
diff --git a/drivers/net/virtio/virtio_pci.h b/drivers/net/virtio/virtio_pci.h
index f63f76c..de271bf 100644
--- a/drivers/net/virtio/virtio_pci.h
+++ b/drivers/net/virtio/virtio_pci.h
@@ -252,7 +252,6 @@ struct virtio_hw {
 	uint16_t    vtnet_hdr_size;
 	uint8_t	    vlan_strip;
 	uint8_t	    use_msix;
-	uint8_t     started;
 	uint8_t     modern;
 	uint8_t     use_simple_rxtx;
 	uint8_t     mac_addr[ETHER_ADDR_LEN];
-- 
1.9.0

^ permalink raw reply	[flat|nested] 43+ messages in thread

* [dpdk-dev] [PATCH v2 09/10] net/virtio: fix less queues being enabled issue
  2016-11-05  9:40 ` [dpdk-dev] [PATCH v2 00/10] net/virtio: fix queue reconfigure issue Yuanhan Liu
                     ` (7 preceding siblings ...)
  2016-11-05  9:41   ` [dpdk-dev] [PATCH v2 08/10] net/virtio: remove started field Yuanhan Liu
@ 2016-11-05  9:41   ` Yuanhan Liu
  2016-11-05  9:41   ` [dpdk-dev] [PATCH v2 10/10] net/virtio: fix multiple queue enabling Yuanhan Liu
                     ` (2 subsequent siblings)
  11 siblings, 0 replies; 43+ messages in thread
From: Yuanhan Liu @ 2016-11-05  9:41 UTC (permalink / raw)
  To: dev
  Cc: Thomas Monjalon, Tan Jianfeng, Kevin Traynor, Ilya Maximets,
	Kyle Larose, Maxime Coquelin, Yuanhan Liu

From the virtio spec's point of view, multiqueue is always enabled or
disabled in queue pairs. DPDK, however, allows the Tx and Rx queue
numbers to differ.

Currently, the virtio PMD gets the queue pair number from the
nb_rx_queues field, which is a problem when the Tx queue number is
greater than the Rx queue number. Say, with 2 Tx queues and 1 Rx queue,
this ends up with only 1 queue pair being enabled, which is wrong.

The fix is straightforward: just pick the bigger of the two numbers and
enable that many queue pairs.

Fixes: 823ad647950a ("virtio: support multiple queues")

Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
---
 drivers/net/virtio/virtio_ethdev.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/net/virtio/virtio_ethdev.c b/drivers/net/virtio/virtio_ethdev.c
index 5ec3e0e..d70bd00 100644
--- a/drivers/net/virtio/virtio_ethdev.c
+++ b/drivers/net/virtio/virtio_ethdev.c
@@ -1493,7 +1493,7 @@ virtio_dev_start(struct rte_eth_dev *dev)
 	 *Otherwise the tap backend might already stop its queue due to fullness.
 	 *vhost backend will have no chance to be waked up
 	 */
-	nb_queues = dev->data->nb_rx_queues;
+	nb_queues = RTE_MAX(dev->data->nb_rx_queues, dev->data->nb_tx_queues);
 	if (nb_queues > 1) {
 		if (virtio_set_multiple_queues(dev, nb_queues) != 0)
 			return -EINVAL;
@@ -1501,7 +1501,7 @@ virtio_dev_start(struct rte_eth_dev *dev)
 
 	PMD_INIT_LOG(DEBUG, "nb_queues=%d", nb_queues);
 
-	for (i = 0; i < nb_queues; i++) {
+	for (i = 0; i < dev->data->nb_rx_queues; i++) {
 		rxvq = dev->data->rx_queues[i];
 		virtqueue_notify(rxvq->vq);
 	}
-- 
1.9.0

^ permalink raw reply	[flat|nested] 43+ messages in thread

* [dpdk-dev] [PATCH v2 10/10] net/virtio: fix multiple queue enabling
  2016-11-05  9:40 ` [dpdk-dev] [PATCH v2 00/10] net/virtio: fix queue reconfigure issue Yuanhan Liu
                     ` (8 preceding siblings ...)
  2016-11-05  9:41   ` [dpdk-dev] [PATCH v2 09/10] net/virtio: fix less queues being enabled issue Yuanhan Liu
@ 2016-11-05  9:41   ` Yuanhan Liu
  2016-11-07  9:25     ` Yuanhan Liu
  2016-11-07 14:44   ` [dpdk-dev] [PATCH v2 00/10] net/virtio: fix queue reconfigure issue Thomas Monjalon
  2016-11-07 15:05   ` Yao, Lei A
  11 siblings, 1 reply; 43+ messages in thread
From: Yuanhan Liu @ 2016-11-05  9:41 UTC (permalink / raw)
  To: dev
  Cc: Thomas Monjalon, Tan Jianfeng, Kevin Traynor, Ilya Maximets,
	Kyle Larose, Maxime Coquelin, Yuanhan Liu

When the queue number shrinks from X to 1, the following code stops us
from sending the multiqueue ctrl message:

        if (nb_queues > 1) {
                if (virtio_set_multiple_queues(dev, nb_queues) != 0)
                        return -EINVAL;
        }

This ends up with X queues still being enabled, which is obviously
wrong. Fix it by removing the check.

Fixes: 823ad647950a ("virtio: support multiple queues")

Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
---
 drivers/net/virtio/virtio_ethdev.c | 6 ++----
 1 file changed, 2 insertions(+), 4 deletions(-)

diff --git a/drivers/net/virtio/virtio_ethdev.c b/drivers/net/virtio/virtio_ethdev.c
index d70bd00..a4cd6f5 100644
--- a/drivers/net/virtio/virtio_ethdev.c
+++ b/drivers/net/virtio/virtio_ethdev.c
@@ -1494,10 +1494,8 @@ virtio_dev_start(struct rte_eth_dev *dev)
 	 *vhost backend will have no chance to be waked up
 	 */
 	nb_queues = RTE_MAX(dev->data->nb_rx_queues, dev->data->nb_tx_queues);
-	if (nb_queues > 1) {
-		if (virtio_set_multiple_queues(dev, nb_queues) != 0)
-			return -EINVAL;
-	}
+	if (virtio_set_multiple_queues(dev, nb_queues) != 0)
+		return -EINVAL;
 
 	PMD_INIT_LOG(DEBUG, "nb_queues=%d", nb_queues);
 
-- 
1.9.0

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [dpdk-dev] [PATCH v2 10/10] net/virtio: fix multiple queue enabling
  2016-11-05  9:41   ` [dpdk-dev] [PATCH v2 10/10] net/virtio: fix multiple queue enabling Yuanhan Liu
@ 2016-11-07  9:25     ` Yuanhan Liu
  0 siblings, 0 replies; 43+ messages in thread
From: Yuanhan Liu @ 2016-11-07  9:25 UTC (permalink / raw)
  To: dev
  Cc: Thomas Monjalon, Tan Jianfeng, Kevin Traynor, Ilya Maximets,
	Kyle Larose, Maxime Coquelin

On Sat, Nov 05, 2016 at 05:41:05PM +0800, Yuanhan Liu wrote:
> When the queue number shrinks from X to 1, the following code stops us
> from sending the multiqueue ctrl message:
> 
>         if (nb_queues > 1) {
>                 if (virtio_set_multiple_queues(dev, nb_queues) != 0)
>                         return -EINVAL;
>         }
> 
> This ends up with X queues still being enabled, which is obviously
> wrong. Fix it by removing the check.
> 
> Fixes: 823ad647950a ("virtio: support multiple queues")
> 
> Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>

This breaks the virtio-user case, where the ctrl queue is not enabled
by default.

Here is an updated patch that fixes this.

	--yliu

---
From 22502943764a99b1398d40f0110f8ce28323323a Mon Sep 17 00:00:00 2001
From: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Date: Sat, 5 Nov 2016 16:53:27 +0800
Subject: [PATCH v3 10/10] net/virtio: fix multiple queue enabling

When the queue number shrinks from X to 1, the following code stops us
from sending the multiqueue ctrl message:

        if (nb_queues > 1) {
                if (virtio_set_multiple_queues(dev, nb_queues) != 0)
                        return -EINVAL;
        }

This ends up with X queues still being enabled, which is obviously
wrong. Fix it by replacing the check with a check on whether the device
supports multiple queue pairs.

Fixes: 823ad647950a ("virtio: support multiple queues")

Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
---
v3: - fix the virtio-user case, where the ctrl queue is disabled by
      default and virtio_set_multiple_queues thus fails.
---
 drivers/net/virtio/virtio_ethdev.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/drivers/net/virtio/virtio_ethdev.c b/drivers/net/virtio/virtio_ethdev.c
index d70bd00..18da98f 100644
--- a/drivers/net/virtio/virtio_ethdev.c
+++ b/drivers/net/virtio/virtio_ethdev.c
@@ -1472,6 +1472,7 @@ virtio_dev_start(struct rte_eth_dev *dev)
 	uint16_t nb_queues, i;
 	struct virtnet_rx *rxvq;
 	struct virtnet_tx *txvq __rte_unused;
+	struct virtio_hw *hw = dev->data->dev_private;
 
 	/* check if lsc interrupt feature is enabled */
 	if (dev->data->dev_conf.intr_conf.lsc) {
@@ -1494,7 +1495,7 @@ virtio_dev_start(struct rte_eth_dev *dev)
 	 *vhost backend will have no chance to be waked up
 	 */
 	nb_queues = RTE_MAX(dev->data->nb_rx_queues, dev->data->nb_tx_queues);
-	if (nb_queues > 1) {
+	if (hw->max_queue_pairs > 1) {
 		if (virtio_set_multiple_queues(dev, nb_queues) != 0)
 			return -EINVAL;
 	}
-- 
1.9.0

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [dpdk-dev] [PATCH v2 04/10] net/virtio: allocate queue at init stage
  2016-11-05  9:40   ` [dpdk-dev] [PATCH v2 04/10] net/virtio: allocate queue at init stage Yuanhan Liu
@ 2016-11-07 14:23     ` Thomas Monjalon
  0 siblings, 0 replies; 43+ messages in thread
From: Thomas Monjalon @ 2016-11-07 14:23 UTC (permalink / raw)
  To: Yuanhan Liu
  Cc: dev, Tan Jianfeng, Kevin Traynor, Ilya Maximets, Kyle Larose,
	Maxime Coquelin

2016-11-05 17:40, Yuanhan Liu:
> -int virtio_dev_queue_setup(struct rte_eth_dev *dev,
> -			int queue_type,
> -			uint16_t queue_idx,
> -			uint16_t vtpci_queue_idx,
> -			uint16_t nb_desc,
> -			unsigned int socket_id,
> -			void **pvq)
> +static int
> +virtio_init_queue(struct rte_eth_dev *dev, uint16_t vtpci_queue_idx)
>  {

I've fixed an error with debug enabled:
drivers/net/virtio/virtio_ethdev.c:335:57: error:
  ‘nb_desc’ undeclared (first use in this function)
  PMD_INIT_LOG(DEBUG, "vq_size: %u nb_desc:%u", vq_size, nb_desc);
                                                         ^

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [dpdk-dev] [PATCH v2 00/10] net/virtio: fix queue reconfigure issue
  2016-11-05  9:40 ` [dpdk-dev] [PATCH v2 00/10] net/virtio: fix queue reconfigure issue Yuanhan Liu
                     ` (9 preceding siblings ...)
  2016-11-05  9:41   ` [dpdk-dev] [PATCH v2 10/10] net/virtio: fix multiple queue enabling Yuanhan Liu
@ 2016-11-07 14:44   ` Thomas Monjalon
  2016-11-07 15:05   ` Yao, Lei A
  11 siblings, 0 replies; 43+ messages in thread
From: Thomas Monjalon @ 2016-11-07 14:44 UTC (permalink / raw)
  To: Yuanhan Liu
  Cc: dev, Tan Jianfeng, Kevin Traynor, Ilya Maximets, Kyle Larose,
	Maxime Coquelin

2016-11-05 17:40, Yuanhan Liu:
> This patchset fixes a few issues related to virtio queue reconfiguration:
> increasing or shrinking the queue number. The major issue, and the reason
> behind it, is described at length in patch 4 "net/virtio: allocate
> queue at init stage".
> 
> Those bugs cannot be fixed with a few lines of code, because the
> current driver init logic is quite wrong; I had to change quite a few
> places to make it right. Meanwhile, I have done my best to keep
> the changes as minimal as possible, so that we have fewer chances
> of breaking something else; it also makes review easier.

Applied with a small fix and v3 of last patch, thanks

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [dpdk-dev] [PATCH v2 00/10] net/virtio: fix queue reconfigure issue
  2016-11-05  9:40 ` [dpdk-dev] [PATCH v2 00/10] net/virtio: fix queue reconfigure issue Yuanhan Liu
                     ` (10 preceding siblings ...)
  2016-11-07 14:44   ` [dpdk-dev] [PATCH v2 00/10] net/virtio: fix queue reconfigure issue Thomas Monjalon
@ 2016-11-07 15:05   ` Yao, Lei A
  11 siblings, 0 replies; 43+ messages in thread
From: Yao, Lei A @ 2016-11-07 15:05 UTC (permalink / raw)
  To: Yuanhan Liu, dev
  Cc: Thomas Monjalon, Tan, Jianfeng, Kevin Traynor, Ilya Maximets,
	Kyle Larose, Maxime Coquelin

Tested-by: Lei Yao <lei.a.yao@intel.com>
- Apply patch to v16.11-rc2
- Compile: Pass
- OS: Ubuntu16.04 4.4.0-45-generic
- GCC: 5.4.0

Most of the basic virtio-related test cases were run with this patch. No functional issues were found and there was no obvious performance drop. The following is the list of passing cases:
TC1:  vhost/virtio PVP vector performance      
TC2:  vhost/virtio PVP normal path performance 
TC3:  vhost/virtio PVP mergeable path performance 
TC7: vhost/virtio-net PVP ipv4 fwd normal path performance
TC8: vhost/virtio-net PVP ipv4 fwd mergeable path performance
TC9: vhost/virtio-net VM2VM iperf with TSO enabled performance
TC11: vhost/virtio-pmd PVP with 2q 2c vector performance
TC12: vhost/virtio-pmd PVP with 2q 1c vector performance
TC16: vhost/virtio1.0 PVP normal performance
TC17: vhost/virtio 1.0 PVP mergeable performance
TC18: vhost/virtio 1.0 PVP vector performance(should be same as normal)
TC19: dpdk vhost + virtio-pmd PVP vector performance
TC20: dpdk vhost + virtio-pmd PVP non-vector performance
TC21: dpdk vhost + virtio-pmd PVP mergeable performance
TC25: Test Vhost/virtio-pmd PVP vector performance with qemu2.5
TC26: Test Vhost/virtio-pmd PVP vector performance with qemu2.6
TC27: Test Vhost/virtio-pmd PVP vector performance with qemu2.7
test vhost-user reconnect with virtio-pmd
test virtio-pmd reconnect with vhost-user 
test vhost-user reconnect with multi virtio-pmd
test multi virtio-pmd reconnect with vhost-user

-----Original Message-----
From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Yuanhan Liu
Sent: Saturday, November 5, 2016 5:41 PM
To: dev@dpdk.org
Cc: Thomas Monjalon <thomas.monjalon@6wind.com>; Tan, Jianfeng <jianfeng.tan@intel.com>; Kevin Traynor <ktraynor@redhat.com>; Ilya Maximets <i.maximets@samsung.com>; Kyle Larose <klarose@sandvine.com>; Maxime Coquelin <maxime.coquelin@redhat.com>; Yuanhan Liu <yuanhan.liu@linux.intel.com>
Subject: [dpdk-dev] [PATCH v2 00/10] net/virtio: fix queue reconfigure issue

This patchset fixes a few issues related to virtio queue reconfiguration: increasing or shrinking the queue number. The major issue, and the reason behind it, is described at length in patch 4 "net/virtio: allocate queue at init stage".

Those bugs cannot be fixed with a few lines of code, because the current driver init logic is quite wrong; I had to change quite a few places to make it right. Meanwhile, I have done my best to keep the changes as minimal as possible, so that we have fewer chances of breaking something else; it also makes review easier.

v2: - fix two more minor issues regarding to queue enabling; see patch 9
      and 10.
    - refined commit log a bit.

Thanks.

	--yliu

---
Yuanhan Liu (10):
  net/virtio: revert fix restart
  net/virtio: simplify queue memzone name
  net/virtio: simplify queue allocation
  net/virtio: allocate queue at init stage
  net/virtio: initiate vring at init stage
  net/virtio: move queue configure code to proper place
  net/virtio: complete init stage at the right place
  net/virtio: remove started field
  net/virtio: fix less queues being enabled issue
  net/virtio: fix multiple queue enabling

 drivers/net/virtio/virtio_ethdev.c | 248 +++++++++++++++++--------------
 drivers/net/virtio/virtio_ethdev.h |  16 --
 drivers/net/virtio/virtio_pci.h    |   3 +-
 drivers/net/virtio/virtio_rxtx.c   | 291 ++++++++++++-------------------------
 drivers/net/virtio/virtqueue.h     |   7 +
 5 files changed, 237 insertions(+), 328 deletions(-)

--
1.9.0

^ permalink raw reply	[flat|nested] 43+ messages in thread

end of thread, other threads:[~2016-11-07 15:05 UTC | newest]

Thread overview: 43+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2016-11-03 16:09 [dpdk-dev] [PATCH 0/8] net/virtio: fix queue reconfigure issue Yuanhan Liu
2016-11-03 16:09 ` [dpdk-dev] [PATCH 1/8] net/virtio: revert "virtio: fix restart" Yuanhan Liu
2016-11-03 20:36   ` Maxime Coquelin
2016-11-04  2:00     ` Yuanhan Liu
2016-11-04  8:09       ` Maxime Coquelin
2016-11-04 14:28         ` Yuanhan Liu
2016-11-04  8:10   ` Maxime Coquelin
2016-11-03 16:09 ` [dpdk-dev] [PATCH 2/8] net/virtio: simplify queue memzone name Yuanhan Liu
2016-11-03 20:41   ` Maxime Coquelin
2016-11-03 16:09 ` [dpdk-dev] [PATCH 3/8] net/virtio: simplify queue allocation Yuanhan Liu
2016-11-03 20:48   ` Maxime Coquelin
2016-11-04  1:51     ` Yuanhan Liu
2016-11-03 16:09 ` [dpdk-dev] [PATCH 4/8] net/virtio: allocate queue at init stage Yuanhan Liu
2016-11-03 21:11   ` Maxime Coquelin
2016-11-04  1:50     ` Yuanhan Liu
2016-11-04  8:08       ` Maxime Coquelin
2016-11-04  8:25   ` Maxime Coquelin
2016-11-04 15:21   ` Kevin Traynor
2016-11-04 20:30     ` Kevin Traynor
2016-11-05  6:15       ` Yuanhan Liu
2016-11-03 16:09 ` [dpdk-dev] [PATCH 5/8] net/virtio: initiate vring " Yuanhan Liu
2016-11-04  8:34   ` Maxime Coquelin
2016-11-03 16:09 ` [dpdk-dev] [PATCH 6/8] net/virtio: move queue configure code to proper place Yuanhan Liu
2016-11-04  8:39   ` Maxime Coquelin
2016-11-03 16:09 ` [dpdk-dev] [PATCH 7/8] net/virtio: complete init stage at the right place Yuanhan Liu
2016-11-04  8:44   ` Maxime Coquelin
2016-11-03 16:10 ` [dpdk-dev] [PATCH 8/8] net/virtio: remove started field Yuanhan Liu
2016-11-04  8:46   ` Maxime Coquelin
2016-11-05  9:40 ` [dpdk-dev] [PATCH v2 00/10] net/virtio: fix queue reconfigure issue Yuanhan Liu
2016-11-05  9:40   ` [dpdk-dev] [PATCH v2 01/10] net/virtio: revert fix restart Yuanhan Liu
2016-11-05  9:40   ` [dpdk-dev] [PATCH v2 02/10] net/virtio: simplify queue memzone name Yuanhan Liu
2016-11-05  9:40   ` [dpdk-dev] [PATCH v2 03/10] net/virtio: simplify queue allocation Yuanhan Liu
2016-11-05  9:40   ` [dpdk-dev] [PATCH v2 04/10] net/virtio: allocate queue at init stage Yuanhan Liu
2016-11-07 14:23     ` Thomas Monjalon
2016-11-05  9:41   ` [dpdk-dev] [PATCH v2 05/10] net/virtio: initiate vring " Yuanhan Liu
2016-11-05  9:41   ` [dpdk-dev] [PATCH v2 06/10] net/virtio: move queue configure code to proper place Yuanhan Liu
2016-11-05  9:41   ` [dpdk-dev] [PATCH v2 07/10] net/virtio: complete init stage at the right place Yuanhan Liu
2016-11-05  9:41   ` [dpdk-dev] [PATCH v2 08/10] net/virtio: remove started field Yuanhan Liu
2016-11-05  9:41   ` [dpdk-dev] [PATCH v2 09/10] net/virtio: fix less queues being enabled issue Yuanhan Liu
2016-11-05  9:41   ` [dpdk-dev] [PATCH v2 10/10] net/virtio: fix multiple queue enabling Yuanhan Liu
2016-11-07  9:25     ` Yuanhan Liu
2016-11-07 14:44   ` [dpdk-dev] [PATCH v2 00/10] net/virtio: fix queue reconfigure issue Thomas Monjalon
2016-11-07 15:05   ` Yao, Lei A

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).