DPDK patches and discussions
* [dpdk-dev] [PATCH 0/3] fix virtio_user issues
@ 2016-08-05 11:36 Jianfeng Tan
  2016-08-05 11:36 ` [dpdk-dev] [PATCH 1/3] net/virtio_user: fix queue pair not enabled Jianfeng Tan
                   ` (4 more replies)
  0 siblings, 5 replies; 26+ messages in thread
From: Jianfeng Tan @ 2016-08-05 11:36 UTC (permalink / raw)
  To: dev; +Cc: yuanhan.liu, zhihong.wang, lining18, Jianfeng Tan

Patch 1: fix issue when using virtio_user with OVS-DPDK.
Patch 2: fix issue when using virtio_user with VPP.
Patch 3: fix issue when failing to start virtio_user devices.

Signed-off-by: Jianfeng Tan <jianfeng.tan@intel.com>

Jianfeng Tan (3):
  net/virtio_user: fix queue pair not enabled
  net/virtio_user: fix wrong sequence of messages
  net/virtio_user: fix dev not freed after init error

 drivers/net/virtio/virtio_user/virtio_user_dev.c | 115 ++++++++++++++---------
 drivers/net/virtio/virtio_user_ethdev.c          |  23 ++++-
 2 files changed, 92 insertions(+), 46 deletions(-)

-- 
2.7.4


* [dpdk-dev] [PATCH 1/3] net/virtio_user: fix queue pair not enabled
  2016-08-05 11:36 [dpdk-dev] [PATCH 0/3] fix virtio_user issues Jianfeng Tan
@ 2016-08-05 11:36 ` Jianfeng Tan
  2016-09-06  6:30   ` Yuanhan Liu
  2016-08-05 11:36 ` [dpdk-dev] [PATCH 2/3] net/virtio_user: fix wrong sequence of messages Jianfeng Tan
                   ` (3 subsequent siblings)
  4 siblings, 1 reply; 26+ messages in thread
From: Jianfeng Tan @ 2016-08-05 11:36 UTC (permalink / raw)
  To: dev; +Cc: yuanhan.liu, zhihong.wang, lining18, Jianfeng Tan

When virtio_user is used with OVS-DPDK (with mq disabled), it cannot
receive any packets.

It's because when vhost provides VHOST_USER_GET_PROTOCOL_FEATURES,
all queue pairs are initialized in the disabled state. Quote
QEMU/docs/specs/vhost-user.txt:
    If VHOST_USER_F_PROTOCOL_FEATURES has not been negotiated, the
    ring is initialized in an enabled state.
    If VHOST_USER_F_PROTOCOL_FEATURES has been negotiated, the ring
    is initialized in a disabled state.

In OVS-DPDK, all queue pairs are in the disabled state by default.
When used with QEMU, QEMU will enable them in the process of
initialization. So the fix here is to include similar logic in
virtio_user.

Fixes: 37a7eb2ae816 ("net/virtio-user: add device emulation layer")

Reported-by: Ning Li <lining18@jd.com>
Signed-off-by: Jianfeng Tan <jianfeng.tan@intel.com>
---
 drivers/net/virtio/virtio_user/virtio_user_dev.c | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/drivers/net/virtio/virtio_user/virtio_user_dev.c b/drivers/net/virtio/virtio_user/virtio_user_dev.c
index 376c9cf..2c4e999 100644
--- a/drivers/net/virtio/virtio_user/virtio_user_dev.c
+++ b/drivers/net/virtio/virtio_user/virtio_user_dev.c
@@ -131,6 +131,13 @@ virtio_user_start_device(struct virtio_user_dev *dev)
 		}
 	}
 
+	/* As this feature is negotiated from the vhost, all queues are
+	 * initialized in the disabled state. For non-mq case, we enable
+	 * the 1st queue pair by default.
+	 */
+	if (dev->features & (1ull << VHOST_USER_GET_PROTOCOL_FEATURES))
+		vhost_user_enable_queue_pair(dev->vhostfd, 0, 1);
+
 	/* After setup all virtqueues, we need to set_features so that these
 	 * features can be set into each virtqueue in vhost side. And before
 	 * that, make sure VHOST_USER_F_PROTOCOL_FEATURES is added if mq is
-- 
2.7.4


* [dpdk-dev] [PATCH 2/3] net/virtio_user: fix wrong sequence of messages
  2016-08-05 11:36 [dpdk-dev] [PATCH 0/3] fix virtio_user issues Jianfeng Tan
  2016-08-05 11:36 ` [dpdk-dev] [PATCH 1/3] net/virtio_user: fix queue pair not enabled Jianfeng Tan
@ 2016-08-05 11:36 ` Jianfeng Tan
  2016-08-05 16:36   ` Stephen Hemminger
  2016-09-06  6:42   ` Yuanhan Liu
  2016-08-05 11:36 ` [dpdk-dev] [PATCH 3/3] net/virtio_user: fix dev not freed after init error Jianfeng Tan
                   ` (2 subsequent siblings)
  4 siblings, 2 replies; 26+ messages in thread
From: Jianfeng Tan @ 2016-08-05 11:36 UTC (permalink / raw)
  To: dev; +Cc: yuanhan.liu, zhihong.wang, lining18, Jianfeng Tan

When virtio_user is used with VPP's native vhost user, it cannot
send/receive any packets.

The root cause is that vpp-vhost-user treats the message
VHOST_USER_SET_FEATURES as putting this device into the init state,
i.e., it zeroes all related structures. However, the previous code
sends this message last in the whole initialization process,
which leads to all previously set information being zeroed.

To fix this issue, we rearrange the sequence of those messages.
  - step 0, send VHOST_USER_SET_VRING_CALL so that vhost allocates
    virtqueue structures;
  - step 1, send VHOST_USER_SET_FEATURES to confirm the features;
  - step 2, send VHOST_USER_SET_MEM_TABLE to share mem regions;
  - step 3, send VHOST_USER_SET_VRING_NUM, VHOST_USER_SET_VRING_BASE,
    VHOST_USER_SET_VRING_ADDR, VHOST_USER_SET_VRING_KICK for each
    queue;
  - ...

Fixes: 37a7eb2ae816 ("net/virtio-user: add device emulation layer")

Reported-by: Zhihong Wang <zhihong.wang@intel.com>
Signed-off-by: Jianfeng Tan <jianfeng.tan@intel.com>
---
 drivers/net/virtio/virtio_user/virtio_user_dev.c | 120 ++++++++++++++---------
 1 file changed, 72 insertions(+), 48 deletions(-)

diff --git a/drivers/net/virtio/virtio_user/virtio_user_dev.c b/drivers/net/virtio/virtio_user/virtio_user_dev.c
index 2c4e999..afdf721 100644
--- a/drivers/net/virtio/virtio_user/virtio_user_dev.c
+++ b/drivers/net/virtio/virtio_user/virtio_user_dev.c
@@ -45,20 +45,14 @@
 #include "../virtio_ethdev.h"
 
 static int
-virtio_user_kick_queue(struct virtio_user_dev *dev, uint32_t queue_sel)
+virtio_user_create_queue(struct virtio_user_dev *dev, uint32_t queue_sel)
 {
-	int callfd, kickfd;
+	/* Of all per virtqueue MSGs, make sure VHOST_SET_VRING_CALL come
+	 * firstly because vhost depends on this msg to allocate virtqueue
+	 * pair.
+	 */
+	int callfd;
 	struct vhost_vring_file file;
-	struct vhost_vring_state state;
-	struct vring *vring = &dev->vrings[queue_sel];
-	struct vhost_vring_addr addr = {
-		.index = queue_sel,
-		.desc_user_addr = (uint64_t)(uintptr_t)vring->desc,
-		.avail_user_addr = (uint64_t)(uintptr_t)vring->avail,
-		.used_user_addr = (uint64_t)(uintptr_t)vring->used,
-		.log_guest_addr = 0,
-		.flags = 0, /* disable log */
-	};
 
 	/* May use invalid flag, but some backend leverages kickfd and callfd as
 	 * criteria to judge if dev is alive. so finally we use real event_fd.
@@ -68,22 +62,30 @@ virtio_user_kick_queue(struct virtio_user_dev *dev, uint32_t queue_sel)
 		PMD_DRV_LOG(ERR, "callfd error, %s\n", strerror(errno));
 		return -1;
 	}
-	kickfd = eventfd(0, EFD_CLOEXEC | EFD_NONBLOCK);
-	if (kickfd < 0) {
-		close(callfd);
-		PMD_DRV_LOG(ERR, "kickfd error, %s\n", strerror(errno));
-		return -1;
-	}
-
-	/* Of all per virtqueue MSGs, make sure VHOST_SET_VRING_CALL come
-	 * firstly because vhost depends on this msg to allocate virtqueue
-	 * pair.
-	 */
 	file.index = queue_sel;
 	file.fd = callfd;
 	vhost_user_sock(dev->vhostfd, VHOST_USER_SET_VRING_CALL, &file);
 	dev->callfds[queue_sel] = callfd;
 
+	return 0;
+}
+
+static int
+virtio_user_kick_queue(struct virtio_user_dev *dev, uint32_t queue_sel)
+{
+	int kickfd;
+	struct vhost_vring_file file;
+	struct vhost_vring_state state;
+	struct vring *vring = &dev->vrings[queue_sel];
+	struct vhost_vring_addr addr = {
+		.index = queue_sel,
+		.desc_user_addr = (uint64_t)(uintptr_t)vring->desc,
+		.avail_user_addr = (uint64_t)(uintptr_t)vring->avail,
+		.used_user_addr = (uint64_t)(uintptr_t)vring->used,
+		.log_guest_addr = 0,
+		.flags = 0, /* disable log */
+	};
+
 	state.index = queue_sel;
 	state.num = vring->num;
 	vhost_user_sock(dev->vhostfd, VHOST_USER_SET_VRING_NUM, &state);
@@ -97,6 +99,12 @@ virtio_user_kick_queue(struct virtio_user_dev *dev, uint32_t queue_sel)
 	 * lastly because vhost depends on this msg to judge if
 	 * virtio is ready.
 	 */
+	kickfd = eventfd(0, EFD_CLOEXEC | EFD_NONBLOCK);
+	if (kickfd < 0) {
+		PMD_DRV_LOG(ERR, "kickfd error, %s\n", strerror(errno));
+		return -1;
+	}
+	file.index = queue_sel;
 	file.fd = kickfd;
 	vhost_user_sock(dev->vhostfd, VHOST_USER_SET_VRING_KICK, &file);
 	dev->kickfds[queue_sel] = kickfd;
@@ -104,44 +112,43 @@ virtio_user_kick_queue(struct virtio_user_dev *dev, uint32_t queue_sel)
 	return 0;
 }
 
-int
-virtio_user_start_device(struct virtio_user_dev *dev)
+static int
+virtio_user_queue_setup(struct virtio_user_dev *dev,
+			int (*fn)(struct virtio_user_dev *, uint32_t))
 {
-	uint64_t features;
 	uint32_t i, queue_sel;
-	int ret;
-
-	/* construct memory region inside each implementation */
-	ret = vhost_user_sock(dev->vhostfd, VHOST_USER_SET_MEM_TABLE, NULL);
-	if (ret < 0)
-		goto error;
 
 	for (i = 0; i < dev->max_queue_pairs; ++i) {
 		queue_sel = 2 * i + VTNET_SQ_RQ_QUEUE_IDX;
-		if (virtio_user_kick_queue(dev, queue_sel) < 0) {
-			PMD_DRV_LOG(INFO, "kick rx vq fails: %u", i);
-			goto error;
+		if (fn(dev, queue_sel) < 0) {
+			PMD_DRV_LOG(INFO, "setup rx vq fails: %u", i);
+			return -1;
 		}
 	}
 	for (i = 0; i < dev->max_queue_pairs; ++i) {
 		queue_sel = 2 * i + VTNET_SQ_TQ_QUEUE_IDX;
-		if (virtio_user_kick_queue(dev, queue_sel) < 0) {
-			PMD_DRV_LOG(INFO, "kick tx vq fails: %u", i);
-			goto error;
+		if (fn(dev, queue_sel) < 0) {
+			PMD_DRV_LOG(INFO, "setup tx vq fails: %u", i);
+			return -1;
 		}
 	}
 
-	/* As this feature is negotiated from the vhost, all queues are
-	 * initialized in the disabled state. For non-mq case, we enable
-	 * the 1st queue pair by default.
-	 */
-	if (dev->features & (1ull << VHOST_USER_GET_PROTOCOL_FEATURES))
-		vhost_user_enable_queue_pair(dev->vhostfd, 0, 1);
+	return 0;
+}
 
-	/* After setup all virtqueues, we need to set_features so that these
-	 * features can be set into each virtqueue in vhost side. And before
-	 * that, make sure VHOST_USER_F_PROTOCOL_FEATURES is added if mq is
-	 * enabled, and VIRTIO_NET_F_MAC is stripped.
+int
+virtio_user_start_device(struct virtio_user_dev *dev)
+{
+	uint64_t features;
+	int ret;
+
+	/* Step 0: tell vhost to create queues */
+	if (virtio_user_queue_setup(dev, virtio_user_create_queue) < 0)
+		goto error;
+
+	/* Step 1: set features
+	 * Make sure VHOST_USER_F_PROTOCOL_FEATURES is added if mq is enabled,
+	 * and VIRTIO_NET_F_MAC is stripped.
 	 */
 	features = dev->features;
 	if (dev->max_queue_pairs > 1)
@@ -152,6 +159,23 @@ virtio_user_start_device(struct virtio_user_dev *dev)
 		goto error;
 	PMD_DRV_LOG(INFO, "set features: %" PRIx64, features);
 
+	/* Step 2: share memory regions */
+	ret = vhost_user_sock(dev->vhostfd, VHOST_USER_SET_MEM_TABLE, NULL);
+	if (ret < 0)
+		goto error;
+
+	/* Step 3: kick queues */
+	if (virtio_user_queue_setup(dev, virtio_user_kick_queue) < 0)
+		goto error;
+
+	/* Step 4: enable queues
+	 * As this feature is negotiated from the vhost, all queues are
+	 * initialized in the disabled state. For non-mq case, we enable
+	 * the 1st queue pair by default.
+	 */
+	if (dev->features & (1ull << VHOST_USER_GET_PROTOCOL_FEATURES))
+		vhost_user_enable_queue_pair(dev->vhostfd, 0, 1);
+
 	return 0;
 error:
 	/* TODO: free resource here or caller to check */
-- 
2.7.4


* [dpdk-dev] [PATCH 3/3] net/virtio_user: fix dev not freed after init error
  2016-08-05 11:36 [dpdk-dev] [PATCH 0/3] fix virtio_user issues Jianfeng Tan
  2016-08-05 11:36 ` [dpdk-dev] [PATCH 1/3] net/virtio_user: fix queue pair not enabled Jianfeng Tan
  2016-08-05 11:36 ` [dpdk-dev] [PATCH 2/3] net/virtio_user: fix wrong sequence of messages Jianfeng Tan
@ 2016-08-05 11:36 ` Jianfeng Tan
  2016-08-05 16:34   ` Stephen Hemminger
  2016-08-29  7:01 ` [dpdk-dev] [PATCH 0/3] fix virtio_user issues Christian Ehrhardt
  2016-09-27 19:11 ` [dpdk-dev] [PATCH v2 " Jianfeng Tan
  4 siblings, 1 reply; 26+ messages in thread
From: Jianfeng Tan @ 2016-08-05 11:36 UTC (permalink / raw)
  To: dev; +Cc: yuanhan.liu, zhihong.wang, lining18, Jianfeng Tan

Currently, when a virtio_user device fails to start (e.g., the vhost
unix socket does not exist), the init function does not return the
struct rte_eth_dev (and some other structs) back to the ether layer.
What's more, it does not report the error to the upper layer.

The fix is to free those structs and report the error when a
virtio_user device fails to start.

Fixes: ce2eabdd43ec ("net/virtio-user: add virtual device")

Signed-off-by: Jianfeng Tan <jianfeng.tan@intel.com>
---
 drivers/net/virtio/virtio_user_ethdev.c | 23 +++++++++++++++++++----
 1 file changed, 19 insertions(+), 4 deletions(-)

diff --git a/drivers/net/virtio/virtio_user_ethdev.c b/drivers/net/virtio/virtio_user_ethdev.c
index daef09b..62ccb0b 100644
--- a/drivers/net/virtio/virtio_user_ethdev.c
+++ b/drivers/net/virtio/virtio_user_ethdev.c
@@ -313,6 +313,17 @@ virtio_user_eth_dev_alloc(const char *name)
 	return eth_dev;
 }
 
+static void
+virtio_user_eth_dev_free(struct rte_eth_dev *eth_dev)
+{
+	struct rte_eth_dev_data *data = eth_dev->data;
+	struct virtio_hw *hw = data->dev_private;
+
+	rte_free(hw->virtio_user_dev);
+	rte_free(hw);
+	rte_eth_dev_release_port(eth_dev);
+}
+
 /* Dev initialization routine. Invoked once for each virtio vdev at
  * EAL init time, see rte_eal_dev_init().
  * Returns 0 on success.
@@ -328,7 +339,7 @@ virtio_user_pmd_devinit(const char *name, const char *params)
 	uint64_t queue_size = VIRTIO_USER_DEF_Q_SZ;
 	char *path = NULL;
 	char *mac_addr = NULL;
-	int ret = -1;
+	int result = -1, ret;
 
 	if (!params || params[0] == '\0') {
 		PMD_INIT_LOG(ERR, "arg %s is mandatory for virtio_user",
@@ -411,15 +422,19 @@ virtio_user_pmd_devinit(const char *name, const char *params)
 
 	hw = eth_dev->data->dev_private;
 	if (virtio_user_dev_init(hw->virtio_user_dev, path, queues, cq,
-				 queue_size, mac_addr) < 0)
+				 queue_size, mac_addr) < 0) {
+		PMD_INIT_LOG(ERR, "virtio_user_dev_init fails");
+		virtio_user_eth_dev_free(eth_dev);
 		goto end;
+	}
 
 	/* previously called by rte_eal_pci_probe() for physical dev */
 	if (eth_virtio_dev_init(eth_dev) < 0) {
 		PMD_INIT_LOG(ERR, "eth_virtio_dev_init fails");
+		virtio_user_eth_dev_free(eth_dev);
 		goto end;
 	}
-	ret = 0;
+	result = 0;
 
 end:
 	if (kvlist)
@@ -428,7 +443,7 @@ end:
 		free(path);
 	if (mac_addr)
 		free(mac_addr);
-	return ret;
+	return result;
 }
 
 /** Called by rte_eth_dev_detach() */
-- 
2.7.4


* Re: [dpdk-dev] [PATCH 3/3] net/virtio_user: fix dev not freed after init error
  2016-08-05 11:36 ` [dpdk-dev] [PATCH 3/3] net/virtio_user: fix dev not freed after init error Jianfeng Tan
@ 2016-08-05 16:34   ` Stephen Hemminger
  2016-08-08  1:07     ` Tan, Jianfeng
  0 siblings, 1 reply; 26+ messages in thread
From: Stephen Hemminger @ 2016-08-05 16:34 UTC (permalink / raw)
  To: Jianfeng Tan; +Cc: dev, yuanhan.liu, zhihong.wang, lining18

On Fri,  5 Aug 2016 11:36:43 +0000
Jianfeng Tan <jianfeng.tan@intel.com> wrote:

> diff --git a/drivers/net/virtio/virtio_user_ethdev.c b/drivers/net/virtio/virtio_user_ethdev.c
> index daef09b..62ccb0b 100644
> --- a/drivers/net/virtio/virtio_user_ethdev.c
> +++ b/drivers/net/virtio/virtio_user_ethdev.c
> @@ -313,6 +313,17 @@ virtio_user_eth_dev_alloc(const char *name)
>  	return eth_dev;
>  }
>  
> +static void
> +virtio_user_eth_dev_free(struct rte_eth_dev *eth_dev)
> +{
> +	struct rte_eth_dev_data *data = eth_dev->data;
> +	struct virtio_hw *hw = data->dev_private;
> +
> +	rte_free(hw->virtio_user_dev);
> +	rte_free(hw);
> +	rte_eth_dev_release_port(eth_dev);
> +}
> +
>  /* Dev initialization routine. Invoked once for each virtio vdev at
>   * EAL init time, see rte_eal_dev_init().
>   * Returns 0 on success.
> @@ -328,7 +339,7 @@ virtio_user_pmd_devinit(const char *name, const char *params)
>  	uint64_t queue_size = VIRTIO_USER_DEF_Q_SZ;
>  	char *path = NULL;
>  	char *mac_addr = NULL;
> -	int ret = -1;
> +	int result = -1, ret;

It is not clear why two return value variables are needed?
>  
>  	if (!params || params[0] == '\0') {
>  		PMD_INIT_LOG(ERR, "arg %s is mandatory for virtio_user",
> @@ -411,15 +422,19 @@ virtio_user_pmd_devinit(const char *name, const char *params)
>  
>  	hw = eth_dev->data->dev_private;
>  	if (virtio_user_dev_init(hw->virtio_user_dev, path, queues, cq,
> -				 queue_size, mac_addr) < 0)
> +				 queue_size, mac_addr) < 0) {
> +		PMD_INIT_LOG(ERR, "virtio_user_dev_init fails");
> +		virtio_user_eth_dev_free(eth_dev);
>  		goto end;
> +	}
>  
>  	/* previously called by rte_eal_pci_probe() for physical dev */
>  	if (eth_virtio_dev_init(eth_dev) < 0) {
>  		PMD_INIT_LOG(ERR, "eth_virtio_dev_init fails");
> +		virtio_user_eth_dev_free(eth_dev);
>  		goto end;
>  	}
> -	ret = 0;
> +	result = 0;
>  
>  end:
>  	if (kvlist)
> @@ -428,7 +443,7 @@ end:
>  		free(path);
>  	if (mac_addr)
>  		free(mac_addr);

Unrelated, but this code could eliminate those if () tests.

> -	return ret;
> +	return result;
>  }


* Re: [dpdk-dev] [PATCH 2/3] net/virtio_user: fix wrong sequence of messages
  2016-08-05 11:36 ` [dpdk-dev] [PATCH 2/3] net/virtio_user: fix wrong sequence of messages Jianfeng Tan
@ 2016-08-05 16:36   ` Stephen Hemminger
  2016-08-08  1:19     ` Tan, Jianfeng
  2016-09-06  6:42   ` Yuanhan Liu
  1 sibling, 1 reply; 26+ messages in thread
From: Stephen Hemminger @ 2016-08-05 16:36 UTC (permalink / raw)
  To: Jianfeng Tan; +Cc: dev, yuanhan.liu, zhihong.wang, lining18

On Fri,  5 Aug 2016 11:36:42 +0000
Jianfeng Tan <jianfeng.tan@intel.com> wrote:

> When virtio_user is used with VPP's native vhost user, it cannot
> send/receive any packets.
> 
> The root cause is that vpp-vhost-user translates the message
> VHOST_USER_SET_FEATURES as puting this device into init state,
> aka, zero all related structures. However, previous code
> puts this message at last in the whole initialization process,
> which leads to all previous information are zeroed.

Not sure what correct behavior is here.  It could be that VPP native
vhost user is broken.  What does QEMU/KVM vhost do in this case?
I would take that as the authoritative source for semantics.

> To fix this issue, we rearrange the sequence of those messages.
>   - step 0, send VHOST_USER_SET_VRING_CALL so that vhost allocates
>     virtqueue structures;
>   - step 1, send VHOST_USER_SET_FEATURES to confirm the features;
>   - step 2, send VHOST_USER_SET_MEM_TABLE to share mem regions;
>   - step 3, send VHOST_USER_SET_VRING_NUM, VHOST_USER_SET_VRING_BASE,
>     VHOST_USER_SET_VRING_ADDR, VHOST_USER_SET_VRING_KICK for each
>     queue;
>   - ...
> 
> Fixes: 37a7eb2ae816 ("net/virtio-user: add device emulation layer")


* Re: [dpdk-dev] [PATCH 3/3] net/virtio_user: fix dev not freed after init error
  2016-08-05 16:34   ` Stephen Hemminger
@ 2016-08-08  1:07     ` Tan, Jianfeng
  0 siblings, 0 replies; 26+ messages in thread
From: Tan, Jianfeng @ 2016-08-08  1:07 UTC (permalink / raw)
  To: Stephen Hemminger; +Cc: dev, yuanhan.liu, Wang, Zhihong, lining18

Hi Stephen,

> -----Original Message-----
> From: Stephen Hemminger [mailto:stephen@networkplumber.org]
> Sent: Saturday, August 6, 2016 12:34 AM
> To: Tan, Jianfeng
> Cc: dev@dpdk.org; yuanhan.liu@linux.intel.com; Wang, Zhihong;
> lining18@jd.com
> Subject: Re: [dpdk-dev] [PATCH 3/3] net/virtio_user: fix dev not freed after
> init error
> 
> On Fri,  5 Aug 2016 11:36:43 +0000
> Jianfeng Tan <jianfeng.tan@intel.com> wrote:
> 
> > diff --git a/drivers/net/virtio/virtio_user_ethdev.c
> b/drivers/net/virtio/virtio_user_ethdev.c
> > index daef09b..62ccb0b 100644
> > --- a/drivers/net/virtio/virtio_user_ethdev.c
> > +++ b/drivers/net/virtio/virtio_user_ethdev.c
> > @@ -313,6 +313,17 @@ virtio_user_eth_dev_alloc(const char *name)
> >  	return eth_dev;
> >  }
> >
> > +static void
> > +virtio_user_eth_dev_free(struct rte_eth_dev *eth_dev)
> > +{
> > +	struct rte_eth_dev_data *data = eth_dev->data;
> > +	struct virtio_hw *hw = data->dev_private;
> > +
> > +	rte_free(hw->virtio_user_dev);
> > +	rte_free(hw);
> > +	rte_eth_dev_release_port(eth_dev);
> > +}
> > +
> >  /* Dev initialization routine. Invoked once for each virtio vdev at
> >   * EAL init time, see rte_eal_dev_init().
> >   * Returns 0 on success.
> > @@ -328,7 +339,7 @@ virtio_user_pmd_devinit(const char *name, const
> char *params)
> >  	uint64_t queue_size = VIRTIO_USER_DEF_Q_SZ;
> >  	char *path = NULL;
> >  	char *mac_addr = NULL;
> > -	int ret = -1;
> > +	int result = -1, ret;
> 
> It is not clear why two return value variables are needed?

My purpose was for "ret" to record the return value of intermediate functions, and "result" to record that of the current function. Is there any convention for that?

If it introduces confusion, I'll change the code to use only one return value variable.
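
For example, a minimal sketch of what the single-variable version could
look like (illustration only, not the actual patch):

	int ret = -1;
	...
	hw = eth_dev->data->dev_private;
	if (virtio_user_dev_init(hw->virtio_user_dev, path, queues, cq,
				 queue_size, mac_addr) < 0) {
		PMD_INIT_LOG(ERR, "virtio_user_dev_init fails");
		virtio_user_eth_dev_free(eth_dev);
		goto end;
	}
	if (eth_virtio_dev_init(eth_dev) < 0) {
		PMD_INIT_LOG(ERR, "eth_virtio_dev_init fails");
		virtio_user_eth_dev_free(eth_dev);
		goto end;
	}
	ret = 0;	/* success; any failure above leaves ret == -1 */
end:
	/* cleanup as before */
	return ret;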


> >
> >  	if (!params || params[0] == '\0') {
> >  		PMD_INIT_LOG(ERR, "arg %s is mandatory for virtio_user",
> > @@ -411,15 +422,19 @@ virtio_user_pmd_devinit(const char *name,
> const char *params)
> >
> >  	hw = eth_dev->data->dev_private;
> >  	if (virtio_user_dev_init(hw->virtio_user_dev, path, queues, cq,
> > -				 queue_size, mac_addr) < 0)
> > +				 queue_size, mac_addr) < 0) {
> > +		PMD_INIT_LOG(ERR, "virtio_user_dev_init fails");
> > +		virtio_user_eth_dev_free(eth_dev);
> >  		goto end;
> > +	}
> >
> >  	/* previously called by rte_eal_pci_probe() for physical dev */
> >  	if (eth_virtio_dev_init(eth_dev) < 0) {
> >  		PMD_INIT_LOG(ERR, "eth_virtio_dev_init fails");
> > +		virtio_user_eth_dev_free(eth_dev);
> >  		goto end;
> >  	}
> > -	ret = 0;
> > +	result = 0;
> >
> >  end:
> >  	if (kvlist)
> > @@ -428,7 +443,7 @@ end:
> >  		free(path);
> >  	if (mac_addr)
> >  		free(mac_addr);
> 
> Unrelated, but this code could eliminate those if () tests.

Thanks for the hint. Yes, as the manual says, "If ptr is NULL, no operation is performed", so the "if"s can be eliminated.
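
For instance, the cleanup tail could then be simplified to something
like this (sketch only):

	end:
		if (kvlist)
			rte_kvargs_free(kvlist);
		free(path);	/* free(NULL) is a no-op */
		free(mac_addr);
		return ret;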

Thanks,
Jianfeng

> 
> > -	return ret;
> > +	return result;
> >  }


* Re: [dpdk-dev] [PATCH 2/3] net/virtio_user: fix wrong sequence of messages
  2016-08-05 16:36   ` Stephen Hemminger
@ 2016-08-08  1:19     ` Tan, Jianfeng
  0 siblings, 0 replies; 26+ messages in thread
From: Tan, Jianfeng @ 2016-08-08  1:19 UTC (permalink / raw)
  To: Stephen Hemminger; +Cc: dev, yuanhan.liu, Wang, Zhihong, lining18

Hi Stephen,

> -----Original Message-----
> From: Stephen Hemminger [mailto:stephen@networkplumber.org]
> Sent: Saturday, August 6, 2016 12:36 AM
> To: Tan, Jianfeng
> Cc: dev@dpdk.org; yuanhan.liu@linux.intel.com; Wang, Zhihong;
> lining18@jd.com
> Subject: Re: [dpdk-dev] [PATCH 2/3] net/virtio_user: fix wrong sequence of
> messages
> 
> On Fri,  5 Aug 2016 11:36:42 +0000
> Jianfeng Tan <jianfeng.tan@intel.com> wrote:
> 
> > When virtio_user is used with VPP's native vhost user, it cannot
> > send/receive any packets.
> >
> > The root cause is that vpp-vhost-user translates the message
> > VHOST_USER_SET_FEATURES as puting this device into init state,
> > aka, zero all related structures. However, previous code
> > puts this message at last in the whole initialization process,
> > which leads to all previous information are zeroed.
> 
> Not sure what correct behavior is here.  It could be that VPP native
> vhost user is broken.  What does QEMU/KVM vhost do in this case?
> I would take that as the authoritative source for semantics.

The corrected message sequence below follows QEMU's behavior. One more thing: QEMU does not document this anywhere, so it was figured out by observing how the vhost backend receives messages from QEMU.

Thanks,
Jianfeng

> 
> > To fix this issue, we rearrange the sequence of those messages.
> >   - step 0, send VHOST_USER_SET_VRING_CALL so that vhost allocates
> >     virtqueue structures;
> >   - step 1, send VHOST_USER_SET_FEATURES to confirm the features;
> >   - step 2, send VHOST_USER_SET_MEM_TABLE to share mem regions;
> >   - step 3, send VHOST_USER_SET_VRING_NUM,
> VHOST_USER_SET_VRING_BASE,
> >     VHOST_USER_SET_VRING_ADDR, VHOST_USER_SET_VRING_KICK for
> each
> >     queue;
> >   - ...
> >
> > Fixes: 37a7eb2ae816 ("net/virtio-user: add device emulation layer")


* Re: [dpdk-dev] [PATCH 0/3] fix virtio_user issues
  2016-08-05 11:36 [dpdk-dev] [PATCH 0/3] fix virtio_user issues Jianfeng Tan
                   ` (2 preceding siblings ...)
  2016-08-05 11:36 ` [dpdk-dev] [PATCH 3/3] net/virtio_user: fix dev not freed after init error Jianfeng Tan
@ 2016-08-29  7:01 ` Christian Ehrhardt
  2016-09-27 19:11 ` [dpdk-dev] [PATCH v2 " Jianfeng Tan
  4 siblings, 0 replies; 26+ messages in thread
From: Christian Ehrhardt @ 2016-08-29  7:01 UTC (permalink / raw)
  To: Jianfeng Tan, stable; +Cc: dev, Yuanhan Liu, zhihong.wang, lining18

On Fri, Aug 5, 2016 at 1:36 PM, Jianfeng Tan <jianfeng.tan@intel.com> wrote:

> Patch 1: fix issue when using virtio_user with OVS-DPDK.
> Patch 2: fix issue when using virtio_user with VPP.
> Patch 3: fix issue when failing to start virtio_user devices.
>
> Signed-off-by: Jianfeng Tan <jianfeng.tan@intel.com>
>
> Jianfeng Tan (3):
>   net/virtio_user: fix queue pair not enabled
>   net/virtio_user: fix wrong sequence of messages
>   net/virtio_user: fix dev not freed after init error
>
>
Hi,
I think those were post 16.07 - would they be reasonable for the stable
tree?


* Re: [dpdk-dev] [PATCH 1/3] net/virtio_user: fix queue pair not enabled
  2016-08-05 11:36 ` [dpdk-dev] [PATCH 1/3] net/virtio_user: fix queue pair not enabled Jianfeng Tan
@ 2016-09-06  6:30   ` Yuanhan Liu
  0 siblings, 0 replies; 26+ messages in thread
From: Yuanhan Liu @ 2016-09-06  6:30 UTC (permalink / raw)
  To: Jianfeng Tan; +Cc: dev, zhihong.wang, lining18

On Fri, Aug 05, 2016 at 11:36:41AM +0000, Jianfeng Tan wrote:
> When virtio_user is used with OVS-DPDK (with mq disabled), it cannot
> receive any packets.
> 
> It's because when vhost provides VHOST_USER_GET_PROTOCOL_FEATURES,
> all queue pairs are initialized in the disabled state. Quote
> QEMU/docs/specs/vhost-user.txt:
>     If VHOST_USER_F_PROTOCOL_FEATURES has not been negotiated, the
>     ring is initialized in an enabled state.
>     If VHOST_USER_F_PROTOCOL_FEATURES has been negotiated, the ring
>     is initialized in a disabled state.
> 
> In OVS-DPDK, all queue pairs are in the disabled state by default.
> When used with QEMU, QEMU will set it as enabled in the process of
> initialization. So the fix here is to include similar logic in
> virtio_user.

The behaviour is actually defined by the virtio spec; vhost-user just
follows it (5.1.6.5.5 Automatic receive steering in multiqueue mode):

    Multiqueue is disabled by default. The driver enables multiqueue by
    executing the VIRTIO_NET_CTRL_MQ_VQ_PAIRS_SET command, specifying
    the number of the transmit and receive queues to be used up to
    max_virtqueue_pairs ....

That is to say, it has nothing to do with OVS-DPDK; nor is it caused
by vhost-user. QEMU, as the virtio-net device emulator, just follows
the spec: it enables the 1st queue pair whether MQ is enabled or not.
Even when the MQ __feature__ is enabled and queues=2 is given, only
the 1st queue pair will be enabled. For the Linux virtio-net driver,
you have to use ethtool to enable more queues if you want them.

However, virtio-user, as yet another virtio-net device emulator,
didn't follow this rule, thus something went wrong.

Put simply, you need to make sure the 1st queue pair is enabled by
default and the rest (if any, say when queues=2 is given) are disabled
in the virtio-user initial stage (say, at virtio_user_dev_init()).
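
Something along these lines, say (a rough sketch of the idea only,
reusing the vhost_user_enable_queue_pair() call from this patch; where
exactly it should live is a detail):

	/* mimic QEMU's virtio-net emulation: enable the 1st queue pair,
	 * leave the others disabled until the driver asks for more via
	 * VIRTIO_NET_CTRL_MQ_VQ_PAIRS_SET
	 */
	for (i = 0; i < dev->max_queue_pairs; ++i)
		vhost_user_enable_queue_pair(dev->vhostfd, i,
					     i == 0 ? 1 : 0);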

	--yliu
> 
> Fixes: 37a7eb2ae816 ("net/virtio-user: add device emulation layer")
> 
> Reported-by: Ning Li <lining18@jd.com>
> Signed-off-by: Jianfeng Tan <jianfeng.tan@intel.com>
> ---
>  drivers/net/virtio/virtio_user/virtio_user_dev.c | 7 +++++++
>  1 file changed, 7 insertions(+)
> 
> diff --git a/drivers/net/virtio/virtio_user/virtio_user_dev.c b/drivers/net/virtio/virtio_user/virtio_user_dev.c
> index 376c9cf..2c4e999 100644
> --- a/drivers/net/virtio/virtio_user/virtio_user_dev.c
> +++ b/drivers/net/virtio/virtio_user/virtio_user_dev.c
> @@ -131,6 +131,13 @@ virtio_user_start_device(struct virtio_user_dev *dev)
>  		}
>  	}
>  
> +	/* As this feature is negotiated from the vhost, all queues are
> +	 * initialized in the disabled state. For non-mq case, we enable
> +	 * the 1st queue pair by default.
> +	 */
> +	if (dev->features & (1ull << VHOST_USER_GET_PROTOCOL_FEATURES))
> +		vhost_user_enable_queue_pair(dev->vhostfd, 0, 1);
> +
>  	/* After setup all virtqueues, we need to set_features so that these
>  	 * features can be set into each virtqueue in vhost side. And before
>  	 * that, make sure VHOST_USER_F_PROTOCOL_FEATURES is added if mq is
> -- 
> 2.7.4


* Re: [dpdk-dev] [PATCH 2/3] net/virtio_user: fix wrong sequence of messages
  2016-08-05 11:36 ` [dpdk-dev] [PATCH 2/3] net/virtio_user: fix wrong sequence of messages Jianfeng Tan
  2016-08-05 16:36   ` Stephen Hemminger
@ 2016-09-06  6:42   ` Yuanhan Liu
  2016-09-06  7:54     ` Tan, Jianfeng
  1 sibling, 1 reply; 26+ messages in thread
From: Yuanhan Liu @ 2016-09-06  6:42 UTC (permalink / raw)
  To: Jianfeng Tan; +Cc: dev, zhihong.wang, lining18

On Fri, Aug 05, 2016 at 11:36:42AM +0000, Jianfeng Tan wrote:
> When virtio_user is used with VPP's native vhost user, it cannot
> send/receive any packets.
> 
> The root cause is that vpp-vhost-user translates the message
> VHOST_USER_SET_FEATURES as puting this device into init state,
> aka, zero all related structures. However, previous code
> puts this message at last in the whole initialization process,
> which leads to all previous information are zeroed.
> 
> To fix this issue, we rearrange the sequence of those messages.
>   - step 0, send VHOST_USER_SET_VRING_CALL so that vhost allocates
>     virtqueue structures;

Yes, it is. However, it's not quite right to do that (you see there is
a FIXME in vhost_user_set_vring_call()).

That means it needs to be fixed: we should not rely on the fact that
it's the first per-vring message we will get in the current QEMU
implementation as if it were the truth.

That also means that naming a function like virtio_user_create_queue()
based on the above behaviour is wrong.

>   - step 1, send VHOST_USER_SET_FEATURES to confirm the features;
>   - step 2, send VHOST_USER_SET_MEM_TABLE to share mem regions;
>   - step 3, send VHOST_USER_SET_VRING_NUM, VHOST_USER_SET_VRING_BASE,
>     VHOST_USER_SET_VRING_ADDR, VHOST_USER_SET_VRING_KICK for each
>     queue;
>   - ...
> 
> Fixes: 37a7eb2ae816 ("net/virtio-user: add device emulation layer")
> 
> Reported-by: Zhihong Wang <zhihong.wang@intel.com>
> Signed-off-by: Jianfeng Tan <jianfeng.tan@intel.com>
> ---
>  drivers/net/virtio/virtio_user/virtio_user_dev.c | 120 ++++++++++++++---------
>  1 file changed, 72 insertions(+), 48 deletions(-)

That's too much code for a bug fix. I'm wondering, how about just
moving VHOST_USER_GET_PROTOCOL_FEATURES ahead, to the beginning of
virtio_user_start_device()? It should fix this issue.

	--yliu


* Re: [dpdk-dev] [PATCH 2/3] net/virtio_user: fix wrong sequence of messages
  2016-09-06  6:42   ` Yuanhan Liu
@ 2016-09-06  7:54     ` Tan, Jianfeng
  2016-09-06  8:20       ` Yuanhan Liu
  0 siblings, 1 reply; 26+ messages in thread
From: Tan, Jianfeng @ 2016-09-06  7:54 UTC (permalink / raw)
  To: Yuanhan Liu; +Cc: dev, zhihong.wang, lining18

Hi Yuanhan,


On 9/6/2016 2:42 PM, Yuanhan Liu wrote:
> On Fri, Aug 05, 2016 at 11:36:42AM +0000, Jianfeng Tan wrote:
>> When virtio_user is used with VPP's native vhost user, it cannot
>> send/receive any packets.
>>
>> The root cause is that vpp-vhost-user translates the message
>> VHOST_USER_SET_FEATURES as puting this device into init state,
>> aka, zero all related structures. However, previous code
>> puts this message at last in the whole initialization process,
>> which leads to all previous information are zeroed.
>>
>> To fix this issue, we rearrange the sequence of those messages.
>>    - step 0, send VHOST_USER_SET_VRING_CALL so that vhost allocates
>>      virtqueue structures;
> Yes, it is. However, it's not that right to do that (you see there is
> a FIXME in vhost_user_set_vring_call()).

I suppose you are referring to vhost_set_vring_call().

>
> That means it need be fixed: we should not rely on fact that it's the
> first per-vring message we will get in the current QEMU implementation
> as the truth.
>
> That also means, naming a function like virtio_user_create_queue() based
> on above behaviour is wrong.

It's actually a good catch. After a quick thought, I think in DPDK
vhost, we may need to create those virtqueues once the unix socket gets
connected, just like in vhost-net, where virtqueues are created on char
file open. Right?

>
>>    - step 1, send VHOST_USER_SET_FEATURES to confirm the features;
>>    - step 2, send VHOST_USER_SET_MEM_TABLE to share mem regions;
>>    - step 3, send VHOST_USER_SET_VRING_NUM, VHOST_USER_SET_VRING_BASE,
>>      VHOST_USER_SET_VRING_ADDR, VHOST_USER_SET_VRING_KICK for each
>>      queue;
>>    - ...
>>
>> Fixes: 37a7eb2ae816 ("net/virtio-user: add device emulation layer")
>>
>> Reported-by: Zhihong Wang <zhihong.wang@intel.com>
>> Signed-off-by: Jianfeng Tan <jianfeng.tan@intel.com>
>> ---
>>   drivers/net/virtio/virtio_user/virtio_user_dev.c | 120 ++++++++++++++---------
>>   1 file changed, 72 insertions(+), 48 deletions(-)
> That's too much of code for a bug fix. I'm wondering how about just
> moving VHOST_USER_GET_PROTOCOL_FEATURES ahead, to the begining of
> virtio_user_start_device()? It should fix this issue.

Why is VHOST_USER_GET_PROTOCOL_FEATURES relevant here? Do you mean 
shifting VHOST_USER_SET_FEATURES earlier?

Thanks,
Jianfeng

>
> 	--yliu


* Re: [dpdk-dev] [PATCH 2/3] net/virtio_user: fix wrong sequence of messages
  2016-09-06  7:54     ` Tan, Jianfeng
@ 2016-09-06  8:20       ` Yuanhan Liu
  2016-09-08  8:53         ` Tan, Jianfeng
  0 siblings, 1 reply; 26+ messages in thread
From: Yuanhan Liu @ 2016-09-06  8:20 UTC (permalink / raw)
  To: Tan, Jianfeng; +Cc: dev, zhihong.wang, lining18

On Tue, Sep 06, 2016 at 03:54:30PM +0800, Tan, Jianfeng wrote:
> Hi Yuanhan,
> 
> 
> On 9/6/2016 2:42 PM, Yuanhan Liu wrote:
> >On Fri, Aug 05, 2016 at 11:36:42AM +0000, Jianfeng Tan wrote:
> >>When virtio_user is used with VPP's native vhost user, it cannot
> >>send/receive any packets.
> >>
> >>The root cause is that vpp-vhost-user translates the message
> >>VHOST_USER_SET_FEATURES as puting this device into init state,
> >>aka, zero all related structures. However, previous code
> >>puts this message at last in the whole initialization process,
> >>which leads to all previous information are zeroed.
> >>
> >>To fix this issue, we rearrange the sequence of those messages.
> >>   - step 0, send VHOST_USER_SET_VRING_CALL so that vhost allocates
> >>     virtqueue structures;
> >Yes, it is. However, it's not that right to do that (you see there is
> >a FIXME in vhost_user_set_vring_call()).
> 
> I suppose you are specifying vhost_set_vring_call().

Oh, I was talking about the new code: I have renamed it to
vhost_user_set_vring_call :)

> >
> >That means it need be fixed: we should not rely on fact that it's the
> >first per-vring message we will get in the current QEMU implementation
> >as the truth.
> >
> >That also means, naming a function like virtio_user_create_queue() based
> >on above behaviour is wrong.
> 
> It's actually a good catch. After a light thought, I think in DPDK vhost, we
> may need to create those virtqueues once unix socket gets connected, just
> like in vhost-net, virtqueues are created on char file open. Right?

There is a difference: for vhost-net and tap mode, IIRC, it knows how
many queues there are before doing setup, while for vhost-user, it
doesn't. That means we have to allocate and set up virtqueues reactively,
just like what we have done in vhost_set_vring_call(). What doesn't look
perfect is that it assumes SET_VRING_CALL is the first per-vring message we will get.

> 
> >
> >>   - step 1, send VHOST_USER_SET_FEATURES to confirm the features;
> >>   - step 2, send VHOST_USER_SET_MEM_TABLE to share mem regions;
> >>   - step 3, send VHOST_USER_SET_VRING_NUM, VHOST_USER_SET_VRING_BASE,
> >>     VHOST_USER_SET_VRING_ADDR, VHOST_USER_SET_VRING_KICK for each
> >>     queue;
> >>   - ...
> >>
> >>Fixes: 37a7eb2ae816 ("net/virtio-user: add device emulation layer")
> >>
> >>Reported-by: Zhihong Wang <zhihong.wang@intel.com>
> >>Signed-off-by: Jianfeng Tan <jianfeng.tan@intel.com>
> >>---
> >>  drivers/net/virtio/virtio_user/virtio_user_dev.c | 120 ++++++++++++++---------
> >>  1 file changed, 72 insertions(+), 48 deletions(-)
> >That's too much of code for a bug fix. I'm wondering how about just
> >moving VHOST_USER_GET_PROTOCOL_FEATURES ahead, to the begining of
> >virtio_user_start_device()? It should fix this issue.
> 
> Why does VHOST_USER_GET_PROTOCOL_FEATURES care? Do you mean shifting
> VHOST_USER_SET_FEATURES earlier?

Oops, right, I meant SET_FEATURES. Sorry about the confusion introduced by
the silly auto-completion.

	--yliu


* Re: [dpdk-dev] [PATCH 2/3] net/virtio_user: fix wrong sequence of messages
  2016-09-06  8:20       ` Yuanhan Liu
@ 2016-09-08  8:53         ` Tan, Jianfeng
  2016-09-08 12:18           ` Yuanhan Liu
  0 siblings, 1 reply; 26+ messages in thread
From: Tan, Jianfeng @ 2016-09-08  8:53 UTC (permalink / raw)
  To: Yuanhan Liu; +Cc: dev, zhihong.wang, lining18



On 9/6/2016 4:20 PM, Yuanhan Liu wrote:
> On Tue, Sep 06, 2016 at 03:54:30PM +0800, Tan, Jianfeng wrote:
>> Hi Yuanhan,
>>
>>
>> On 9/6/2016 2:42 PM, Yuanhan Liu wrote:
>>> On Fri, Aug 05, 2016 at 11:36:42AM +0000, Jianfeng Tan wrote:
>>>> When virtio_user is used with VPP's native vhost user, it cannot
>>>> send/receive any packets.
>>>>
>>>> The root cause is that vpp-vhost-user translates the message
>>>> VHOST_USER_SET_FEATURES as puting this device into init state,
>>>> aka, zero all related structures. However, previous code
>>>> puts this message at last in the whole initialization process,
>>>> which leads to all previous information are zeroed.
>>>>
>>>> To fix this issue, we rearrange the sequence of those messages.
>>>>    - step 0, send VHOST_USER_SET_VRING_CALL so that vhost allocates
>>>>      virtqueue structures;
>>> Yes, it is. However, it's not that right to do that (you see there is
>>> a FIXME in vhost_user_set_vring_call()).
>> I suppose you are specifying vhost_set_vring_call().
> Oh, I was talking about the new code: I have renamed it to
> vhost_user_set_vring_call :)
>
>>> That means it need be fixed: we should not rely on fact that it's the
>>> first per-vring message we will get in the current QEMU implementation
>>> as the truth.
>>>
>>> That also means, naming a function like virtio_user_create_queue() based
>>> on above behaviour is wrong.
>> It's actually a good catch. After a light thought, I think in DPDK vhost, we
>> may need to create those virtqueues once unix socket gets connected, just
>> like in vhost-net, virtqueues are created on char file open. Right?
> There is a difference: for vhost-net and tap mode, IIRC, it knows how
> many queues before doing setup.

No, from linux/drivers/vhost/net.c:vhost_net_open(), we can see that 
virtqueues are allocated according to VHOST_NET_VQ_MAX.
How about reconsidering the previous suggestion to allocate vqs once the 
connection is established?
Never mind, the above fix on the vhost side will not take effect on existing 
vpp-vhost implementations. We still need to fix it on the virtio side.

>   While for vhost-user, it doesn't. That
> means, we have to allocate and setup virtqueues reactively: just like
> what we have done in vhost_set_vring_call(). What doesn't look perfect
> is it assume SET_VRING_CALL is the first per-vring message we will get.

Yes, depending on the assumption that SET_VRING_CALL is the first 
per-vring message looks like a bad implementation. As Stephen has 
suggested, it's more like a bug in vpp. But if we treat it that way, we 
will fix nothing here.


>>>>    - step 1, send VHOST_USER_SET_FEATURES to confirm the features;
>>>>    - step 2, send VHOST_USER_SET_MEM_TABLE to share mem regions;
>>>>    - step 3, send VHOST_USER_SET_VRING_NUM, VHOST_USER_SET_VRING_BASE,
>>>>      VHOST_USER_SET_VRING_ADDR, VHOST_USER_SET_VRING_KICK for each
>>>>      queue;
>>>>    - ...
>>>>
>>>> Fixes: 37a7eb2ae816 ("net/virtio-user: add device emulation layer")
>>>>
>>>> Reported-by: Zhihong Wang <zhihong.wang@intel.com>
>>>> Signed-off-by: Jianfeng Tan <jianfeng.tan@intel.com>
>>>> ---
>>>>   drivers/net/virtio/virtio_user/virtio_user_dev.c | 120 ++++++++++++++---------
>>>>   1 file changed, 72 insertions(+), 48 deletions(-)
>>> That's too much of code for a bug fix. I'm wondering how about just
>>> moving VHOST_USER_GET_PROTOCOL_FEATURES ahead, to the begining of
>>> virtio_user_start_device()? It should fix this issue.
>> Why does VHOST_USER_GET_PROTOCOL_FEATURES care? Do you mean shifting
>> VHOST_USER_SET_FEATURES earlier?
> Oops, right, I meant SET_FEATURES. Sorry about confusion introduced by
> the silly auto-completion.

Still not working. VPP needs SET_VRING_CALL to create the vq first.

Thanks,
Jianfeng

>
> 	--yliu


* Re: [dpdk-dev] [PATCH 2/3] net/virtio_user: fix wrong sequence of messages
  2016-09-08  8:53         ` Tan, Jianfeng
@ 2016-09-08 12:18           ` Yuanhan Liu
  2016-09-09  3:59             ` Tan, Jianfeng
  0 siblings, 1 reply; 26+ messages in thread
From: Yuanhan Liu @ 2016-09-08 12:18 UTC (permalink / raw)
  To: Tan, Jianfeng; +Cc: dev, zhihong.wang, lining18

On Thu, Sep 08, 2016 at 04:53:22PM +0800, Tan, Jianfeng wrote:
> 
> 
> On 9/6/2016 4:20 PM, Yuanhan Liu wrote:
> >On Tue, Sep 06, 2016 at 03:54:30PM +0800, Tan, Jianfeng wrote:
> >>Hi Yuanhan,
> >>
> >>
> >>On 9/6/2016 2:42 PM, Yuanhan Liu wrote:
> >>>On Fri, Aug 05, 2016 at 11:36:42AM +0000, Jianfeng Tan wrote:
> >>>>When virtio_user is used with VPP's native vhost user, it cannot
> >>>>send/receive any packets.
> >>>>
> >>>>The root cause is that vpp-vhost-user translates the message
> >>>>VHOST_USER_SET_FEATURES as puting this device into init state,
> >>>>aka, zero all related structures. However, previous code
> >>>>puts this message at last in the whole initialization process,
> >>>>which leads to all previous information are zeroed.
> >>>>
> >>>>To fix this issue, we rearrange the sequence of those messages.
> >>>>   - step 0, send VHOST_USER_SET_VRING_CALL so that vhost allocates
> >>>>     virtqueue structures;
> >>>Yes, it is. However, it's not that right to do that (you see there is
> >>>a FIXME in vhost_user_set_vring_call()).
> >>I suppose you are specifying vhost_set_vring_call().
> >Oh, I was talking about the new code: I have renamed it to
> >vhost_user_set_vring_call :)
> >
> >>>That means it need be fixed: we should not rely on fact that it's the
> >>>first per-vring message we will get in the current QEMU implementation
> >>>as the truth.
> >>>
> >>>That also means, naming a function like virtio_user_create_queue() based
> >>>on above behaviour is wrong.
> >>It's actually a good catch. After a light thought, I think in DPDK vhost, we
> >>may need to create those virtqueues once unix socket gets connected, just
> >>like in vhost-net, virtqueues are created on char file open. Right?
> >There is a difference: for vhost-net and tap mode, IIRC, it knows how
> >many queues before doing setup.
> 
> No, from linux/drivers/vhost/net.c:vhost_net_open(), we can see that
> virtqueues are allocated according to VHOST_NET_VQ_MAX.

Well, if you take a closer look, you will find VHOST_NET_VQ_MAX is
defined to 2. That means it allocates one queue-pair only.

And FYI, the MQ support for vhost-net is actually done in the tap
driver, not in the vhost-net driver. That results in the MQ
implementation being a bit different between vhost-net and vhost-user.

Put simply, in vhost-net, each queue-pair has a backend fd associated
with it. Vhost requests for different queue-pairs are sent over
different fds. That also means vhost-net doesn't even need to be aware
of the MQ stuff.

However, in the vhost-user implementation, all queue-pairs share one
socket fd. All requests are also sent over that single socket fd,
thus the backend (DPDK vhost) has to figure out how many queue
pairs are actually enabled: we detect it by reading the
vring index of the SET_VRING_CALL message; it's not quite right though.
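
In pseudo-C, the detection is roughly like below (just a sketch; the
names only approximate the DPDK vhost-user code):

	/* all queue pairs arrive over one socket fd, so grow the
	 * virtqueue bookkeeping based on the vring index carried in
	 * SET_VRING_CALL
	 */
	vring_idx = file.index;		/* from the SET_VRING_CALL payload */
	cur_qp = vring_idx / 2 + 1;	/* two vrings per queue pair */
	while (dev->virt_qp_nb < cur_qp)
		alloc_vring_queue_pair(dev);	/* allocate one RX/TX vq pair */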

> How about reconsidering previous suggestion to allocate vq once connection
> is established?

That's too much, because DPDK claims to support up to 0x8000
queue-pairs. Not to mention that the vhost_virtqueue struct
was way too big before: it even holds an array of buf_vec with
size 256.

> Never mind, above fix on the vhost side will not take effect on existing
> vpp-vhost implementations.

Actually, I was talking about the DPDK vhost implementation :)

> We still need to fix it in the virtio side.

Yes, we could fix it on our side, even though VPP is broken.

> >  While for vhost-user, it doesn't. That
> >means, we have to allocate and setup virtqueues reactively: just like
> >what we have done in vhost_set_vring_call(). What doesn't look perfect
> >is it assume SET_VRING_CALL is the first per-vring message we will get.
> 
> Yes, depending on the assumption that SET_VRING_CALL is the first per-vring
> message, looks like a bad implementation. As Stephen has suggested, it's
> more like a bug in vpp. If we treat it like that way, we will fix nothing
> here.
> 
> 
> >>>>   - step 1, send VHOST_USER_SET_FEATURES to confirm the features;
> >>>>   - step 2, send VHOST_USER_SET_MEM_TABLE to share mem regions;
> >>>>   - step 3, send VHOST_USER_SET_VRING_NUM, VHOST_USER_SET_VRING_BASE,
> >>>>     VHOST_USER_SET_VRING_ADDR, VHOST_USER_SET_VRING_KICK for each
> >>>>     queue;
> >>>>   - ...
> >>>>
> >>>>Fixes: 37a7eb2ae816 ("net/virtio-user: add device emulation layer")
> >>>>
> >>>>Reported-by: Zhihong Wang <zhihong.wang@intel.com>
> >>>>Signed-off-by: Jianfeng Tan <jianfeng.tan@intel.com>
> >>>>---
> >>>>  drivers/net/virtio/virtio_user/virtio_user_dev.c | 120 ++++++++++++++---------
> >>>>  1 file changed, 72 insertions(+), 48 deletions(-)
> >>>That's too much of code for a bug fix. I'm wondering how about just
> >>>moving VHOST_USER_GET_PROTOCOL_FEATURES ahead, to the begining of
> >>>virtio_user_start_device()? It should fix this issue.
> >>Why does VHOST_USER_GET_PROTOCOL_FEATURES care? Do you mean shifting
> >>VHOST_USER_SET_FEATURES earlier?
> >Oops, right, I meant SET_FEATURES. Sorry about confusion introduced by
> >the silly auto-completion.
> 
> Still not working. VPP needs SET_VRING_CALL to create vq firstly.

I didn't get it. In the proposal, SET_FEATURES is sent before every other
message, thus it should not cause the issue you described in this patch.
Besides, haven't we already sent SET_VRING_CALL before the other messages
(well, except the SET_FEATURES and SET_MEM_TABLE messages)? Is that still
not enough for vpp's native vhost-user implementation?

	--yliu


* Re: [dpdk-dev] [PATCH 2/3] net/virtio_user: fix wrong sequence of messages
  2016-09-08 12:18           ` Yuanhan Liu
@ 2016-09-09  3:59             ` Tan, Jianfeng
  2016-09-09  4:19               ` Yuanhan Liu
  0 siblings, 1 reply; 26+ messages in thread
From: Tan, Jianfeng @ 2016-09-09  3:59 UTC (permalink / raw)
  To: Yuanhan Liu; +Cc: dev, zhihong.wang, lining18



On 9/8/2016 8:18 PM, Yuanhan Liu wrote:
> On Thu, Sep 08, 2016 at 04:53:22PM +0800, Tan, Jianfeng wrote:
>>
>> On 9/6/2016 4:20 PM, Yuanhan Liu wrote:
>>> On Tue, Sep 06, 2016 at 03:54:30PM +0800, Tan, Jianfeng wrote:
>>>> Hi Yuanhan,
>>>>
>>>>
>>>> On 9/6/2016 2:42 PM, Yuanhan Liu wrote:
>>>>> On Fri, Aug 05, 2016 at 11:36:42AM +0000, Jianfeng Tan wrote:
>>>>>> When virtio_user is used with VPP's native vhost user, it cannot
>>>>>> send/receive any packets.
>>>>>>
>>>>>> The root cause is that vpp-vhost-user translates the message
>>>>>> VHOST_USER_SET_FEATURES as puting this device into init state,
>>>>>> aka, zero all related structures. However, previous code
>>>>>> puts this message at last in the whole initialization process,
>>>>>> which leads to all previous information are zeroed.
>>>>>>
>>>>>> To fix this issue, we rearrange the sequence of those messages.
>>>>>>    - step 0, send VHOST_USER_SET_VRING_CALL so that vhost allocates
>>>>>>      virtqueue structures;
>>>>> Yes, it is. However, it's not that right to do that (you see there is
>>>>> a FIXME in vhost_user_set_vring_call()).
>>>> I suppose you are specifying vhost_set_vring_call().
>>> Oh, I was talking about the new code: I have renamed it to
>>> vhost_user_set_vring_call :)
>>>
>>>>> That means it need be fixed: we should not rely on fact that it's the
>>>>> first per-vring message we will get in the current QEMU implementation
>>>>> as the truth.
>>>>>
>>>>> That also means, naming a function like virtio_user_create_queue() based
>>>>> on above behaviour is wrong.
>>>> It's actually a good catch. After a light thought, I think in DPDK vhost, we
>>>> may need to create those virtqueues once unix socket gets connected, just
>>>> like in vhost-net, virtqueues are created on char file open. Right?
>>> There is a difference: for vhost-net and tap mode, IIRC, it knows how
>>> many queues before doing setup.
>> No, from linux/drivers/vhost/net.c:vhost_net_open(), we can see that
>> virtqueues are allocated according to VHOST_NET_VQ_MAX.
> Well, if you took a closer look, you will find VHOST_NET_VQ_MAX is
> defined to 2. That means it allocates a queue-pair only.
>
> And FYI, the MQ support for vhost-net is actually done in the tap
> driver, but not in vhost-net driver. That results to the MQ
> implementation is a bit different between vhost-net and vhost-user.
>
> Put simply, in vhost-net, one queue-pair has a backend fd associated
> with it. Vhost requests for different queue-pair are sent by different
> fd. That also means the vhost-net doesn't even need be aware of the
> MQ stuff.
>
> However, for vhost-user implementation, all queue-pairs share one
> socket fd. All requests all also sent over the single socket fd,
> thus the backend (DPDK vhost) has to figure out how many queue
> pairs are actually enabled: and we detect it by reading the
> vring index of SET_VRING_CALL message; it's not quite right though.

Aha, right, nice analysis.

>
>> How about reconsidering previous suggestion to allocate vq once connection
>> is established?
> That's too much, because DPDK claims to support up to 0x8000
> queue-pairs. Don't even to say that the vhost_virtqueue struct
> was way too big before: it even holds an array of buf_vec with
> size 256.

Another mistake of my memory; I wrongly remembered that only 8 VQs 
are supported.
One unrelated thing: provided that VHOST_MAX_QUEUE_PAIRS equals 
0x8000, struct vhost_virtqueue *virtqueue[VHOST_MAX_QUEUE_PAIRS * 2] 
spends 4MB for each virtio device, which could be refined.

>
>> Never mind, above fix on the vhost side will not take effect on existing
>> vpp-vhost implementations.
> Actually, I was talking about the DPDK vhost implementation :)

This patch is talking about vpp's native vhost implementation, not 
dpdk-vhost, and not the way vpp uses dpdk-vhost.

>
>> We still need to fix it in the virtio side.
> Yes, we could fix it in our side, even though VPP is broken.

OK, let's get back to this patch.

>
>>>   While for vhost-user, it doesn't. That
>>> means, we have to allocate and setup virtqueues reactively: just like
>>> what we have done in vhost_set_vring_call(). What doesn't look perfect
>>> is it assume SET_VRING_CALL is the first per-vring message we will get.
>> Yes, depending on the assumption that SET_VRING_CALL is the first per-vring
>> message, looks like a bad implementation. As Stephen has suggested, it's
>> more like a bug in vpp. If we treat it like that way, we will fix nothing
>> here.
>>
>>
>>>>>>    - step 1, send VHOST_USER_SET_FEATURES to confirm the features;
>>>>>>    - step 2, send VHOST_USER_SET_MEM_TABLE to share mem regions;
>>>>>>    - step 3, send VHOST_USER_SET_VRING_NUM, VHOST_USER_SET_VRING_BASE,
>>>>>>      VHOST_USER_SET_VRING_ADDR, VHOST_USER_SET_VRING_KICK for each
>>>>>>      queue;
>>>>>>    - ...
>>>>>>
>>>>>> Fixes: 37a7eb2ae816 ("net/virtio-user: add device emulation layer")
>>>>>>
>>>>>> Reported-by: Zhihong Wang <zhihong.wang@intel.com>
>>>>>> Signed-off-by: Jianfeng Tan <jianfeng.tan@intel.com>
>>>>>> ---
>>>>>>   drivers/net/virtio/virtio_user/virtio_user_dev.c | 120 ++++++++++++++---------
>>>>>>   1 file changed, 72 insertions(+), 48 deletions(-)
>>>>> That's too much of code for a bug fix. I'm wondering how about just
>>>>> moving VHOST_USER_GET_PROTOCOL_FEATURES ahead, to the begining of
>>>>> virtio_user_start_device()? It should fix this issue.
>>>> Why does VHOST_USER_GET_PROTOCOL_FEATURES care? Do you mean shifting
>>>> VHOST_USER_SET_FEATURES earlier?
>>> Oops, right, I meant SET_FEATURES. Sorry about confusion introduced by
>>> the silly auto-completion.
>> Still not working. VPP needs SET_VRING_CALL to create vq firstly.
> Didn't get it. In the proposal, SET_FEATURES is sent before every other
> messages, thus it should not cause the issue you described in this patch.

OK. Let me try to explain. We take three vhost implementations into 
consideration: dpdk-2.2-vhost, dpdk-master-vhost, and vpp-native-vhost.

If set_feature is sent before set_vring_call, dpdk-2.2-vhost will fail: 
inside the set_feature handler, it assigns the header length to VQs 
which are only created in the set_vring_call handler.
So we need to keep set_vring_call first. Then set_feature needs to be 
sent before any other msgs; this is what vpp-native-vhost requires. In 
all, the sequence is like this:
1. set_vring_call,
2. set_feature,
3. other msgs

> Besides, haven't we already sent SET_VRING_CALL before other messages
> (well, execept the SET_FEATURES and SET_MEM_TABLE message)?

Yes, set_vring_call is already in the first place, but we need to plug 
set_feature in between set_vring_call and the other msgs. Previously, 
set_vring_call and the other msgs were sent together.

Thanks,
Jianfeng

> That's still
> not enough for vpp's native vhost-user implementation?

> 	--yliu


* Re: [dpdk-dev] [PATCH 2/3] net/virtio_user: fix wrong sequence of messages
  2016-09-09  3:59             ` Tan, Jianfeng
@ 2016-09-09  4:19               ` Yuanhan Liu
  2016-09-09  5:50                 ` Tan, Jianfeng
  0 siblings, 1 reply; 26+ messages in thread
From: Yuanhan Liu @ 2016-09-09  4:19 UTC (permalink / raw)
  To: Tan, Jianfeng; +Cc: dev, zhihong.wang, lining18

On Fri, Sep 09, 2016 at 11:59:18AM +0800, Tan, Jianfeng wrote:
>                 It's actually a good catch. After a light thought, I think in DPDK vhost, we
>                 may need to create those virtqueues once unix socket gets connected, just
>                 like in vhost-net, virtqueues are created on char file open. Right?
> 
>             There is a difference: for vhost-net and tap mode, IIRC, it knows how
>             many queues before doing setup.
> 
>         No, from linux/drivers/vhost/net.c:vhost_net_open(), we can see that
>         virtqueues are allocated according to VHOST_NET_VQ_MAX.
> 
>     Well, if you took a closer look, you will find VHOST_NET_VQ_MAX is
>     defined to 2. That means it allocates a queue-pair only.
> 
>     And FYI, the MQ support for vhost-net is actually done in the tap
>     driver, but not in vhost-net driver. That results to the MQ
>     implementation is a bit different between vhost-net and vhost-user.
> 
>     Put simply, in vhost-net, one queue-pair has a backend fd associated
>     with it. Vhost requests for different queue-pair are sent by different
>     fd. That also means the vhost-net doesn't even need be aware of the
>     MQ stuff.
> 
>     However, for vhost-user implementation, all queue-pairs share one
>     socket fd. All requests all also sent over the single socket fd,
>     thus the backend (DPDK vhost) has to figure out how many queue
>     pairs are actually enabled: and we detect it by reading the
>     vring index of SET_VRING_CALL message; it's not quite right though.
> 
> 
> Aha, right, nice analysis.

Just some rough memory in my mind ;-)
> 
> 
> 
>         How about reconsidering the previous suggestion to allocate the vqs once the
>         connection is established?
> 
>     That's too much, because DPDK claims to support up to 0x8000
>     queue-pairs. Not to mention that the vhost_virtqueue struct
>     was way too big before: it even holds an array of buf_vec with
>     size 256.
> 
> 
> Another memory lapse on my part: I wrongly remembered that only 8 VQs are
> supported.
> One unrelated thing: given that VHOST_MAX_QUEUE_PAIRS equals 0x8000,
> struct vhost_virtqueue  *virtqueue[VHOST_MAX_QUEUE_PAIRS * 2] spends 4MB for
> each virtio device, which could be refined.

Yes, we could allocate a small array first, and then re-allocate if
necessary.

> 
> 
> 
>         Never mind, above fix on the vhost side will not take effect on existing
>         vpp-vhost implementations.
> 
>     Actually, I was talking about the DPDK vhost implementation :)
> 
> 
> This patch is talking about vpp's native vhost implementation, not dpdk-vhost,
> and not the way vpp uses dpdk-vhost.

Yes, I know. What I meant is that there is a "workaround" in the DPDK vhost
implementation, and since you bring this issue to the table again,
it's a chance to think about how we can fix it.

A rough idea that comes to my mind is that we could check for per-vring
messages at the beginning of vhost_user_msg_handler() and allocate the
related vq when necessary (i.e. when it's the first vring message we get).

Yeah, I know it's a bit ugly, but it at least gets rid of that "not-that-true"
assumption.

> 
>         Still not working. VPP needs SET_VRING_CALL to create vq firstly.
> 
>     Didn't get it. In the proposal, SET_FEATURES is sent before every other
>     messages, thus it should not cause the issue you described in this patch.
> 
> 
> OK. Let me try to explain. We take three vhost implementations into
> consideration: dpdk-2.2-vhost, dpdk-master-vhost, vpp-native-vhost.
> 
> If set_feature before set_vring_call, dpdk-2.2-vhost will fail: inside
> set_feature handler, assigning header length to VQs which will be created in
> set_vring_call handler.

Oh, right. That was an incorrect implementation.

> So we need to keep set_vring_call firstly.
> Then set_feature needs to be sent
> before any other msgs, this is what vpp-native-vhost requires. In all, the
> sequence is like this:
> 1. set_vring_call,
> 2. set_feature,
> 3. other msgs
> 
> 
>     Besides, haven't we already sent SET_VRING_CALL before other messages
>     (well, except the SET_FEATURES and SET_MEM_TABLE message)?
> 
> 
> Yes, set_vring_call is already in the first place, but we need to plugin
> set_feature between set_vring_call and other msgs. Previously, set_vring_call
> and other msgs are together.

Okay. Another thing I noticed is that virtio-user lacks some feature
negotiations, like GET_FEATURES and GET_PROTOCOL_FEATURES. I think you
might need to add them back at some point?

	--yliu

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [dpdk-dev] [PATCH 2/3] net/virtio_user: fix wrong sequence of messages
  2016-09-09  4:19               ` Yuanhan Liu
@ 2016-09-09  5:50                 ` Tan, Jianfeng
  2016-09-09  6:03                   ` Yuanhan Liu
  0 siblings, 1 reply; 26+ messages in thread
From: Tan, Jianfeng @ 2016-09-09  5:50 UTC (permalink / raw)
  To: Yuanhan Liu; +Cc: dev, zhihong.wang, lining18

On 9/9/2016 12:19 PM, Yuanhan Liu wrote:

>>
>>
>>          Never mind, above fix on the vhost side will not take effect on existing
>>          vpp-vhost implementations.
>>
>>      Actually, I was talking about the DPDK vhost implementation :)
>>
>>
>> This patch is talking about vpp's native vhost implementation, not dpdk-vhost,
>> and not the way vpp uses dpdk-vhost.
> Yes, I know. What I meant is there was a "workaround" in DPDK vhost
> implementation, and since you bring this issue on the table again,
> it's a chance to think about how can we fix it.
>
> A rough idea come to my mind is we could check all the per-vring message
> at the beginning of vhost_user_msg_handler() and allocate related vq when
> necessary (when it's the first vring message we got).
>
> Yeah, I know it's a bit ugly, but it at least gets rid of that "not-that-true"
> assumption.

Sounds workable. So we'd define those vq-specific msgs, like:
VHOST_USER_SET_VRING_NUM,
VHOST_USER_SET_VRING_ADDR,
VHOST_USER_SET_VRING_BASE,
VHOST_USER_GET_VRING_BASE(?),
VHOST_USER_SET_VRING_KICK,
VHOST_USER_SET_VRING_CALL,
VHOST_USER_SET_VRING_ENABLE,
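
and then a small helper on the vhost side could classify them; a rough
sketch (not actual code) could be:

    static int
    is_vring_msg(uint32_t req)
    {
        switch (req) {
        case VHOST_USER_SET_VRING_NUM:
        case VHOST_USER_SET_VRING_ADDR:
        case VHOST_USER_SET_VRING_BASE:
        case VHOST_USER_GET_VRING_BASE:
        case VHOST_USER_SET_VRING_KICK:
        case VHOST_USER_SET_VRING_CALL:
        case VHOST_USER_SET_VRING_ENABLE:
            return 1;
        default:
            return 0;
        }
    }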


>
>>          Still not working. VPP needs SET_VRING_CALL to create vq firstly.
>>
>>      Didn't get it. In the proposal, SET_FEATURES is sent before every other
>>      messages, thus it should not cause the issue you described in this patch.
>>
>>
>> OK. Let me try to explain. We take three vhost implementations into
>> consideration: dpdk-2.2-vhost, dpdk-master-vhost, vpp-native-vhost.
>>
>> If set_feature before set_vring_call, dpdk-2.2-vhost will fail: inside
>> set_feature handler, assigning header length to VQs which will be created in
>> set_vring_call handler.
> >Oh, right. That was an incorrect implementation.
>
>> So we need to keep set_vring_call firstly.
>> Then set_feature needs to be sent
>> before any other msgs, this is what vpp-native-vhost requires. In all, the
>> sequence is like this:
>> 1. set_vring_call,
>> 2. set_feature,
>> 3. other msgs
>>
>>
>>      Besides, haven't we already sent SET_VRING_CALL before other messages
>>      (well, except the SET_FEATURES and SET_MEM_TABLE message)?
>>
>>
>> Yes, set_vring_call is already in the first place, but we need to plugin
>> set_feature between set_vring_call and other msgs. Previously, set_vring_call
>> and other msgs are together.
> Okay. Another thing I noticed is that virtio-user lacks some feature
> negotiations, like GET_FEATURES and GET_PROTOCOL_FEATURES. I think you
> might need add them back somewhen?

GET_FEATURES has been done in virtio_user_dev_init(). 
GET_PROTOCOL_FEATURES is not supported yet. I see that the features in 
PROTOCOL_FEATURES are for live migration (right?). Assuming that, is anyone 
using containers/processes enabling live migration so far? I don't think so.

Thanks,
Jianfeng


>
> 	--yliu

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [dpdk-dev] [PATCH 2/3] net/virtio_user: fix wrong sequence of messages
  2016-09-09  5:50                 ` Tan, Jianfeng
@ 2016-09-09  6:03                   ` Yuanhan Liu
  2016-09-09  6:24                     ` Tan, Jianfeng
  0 siblings, 1 reply; 26+ messages in thread
From: Yuanhan Liu @ 2016-09-09  6:03 UTC (permalink / raw)
  To: Tan, Jianfeng; +Cc: dev, zhihong.wang, lining18

On Fri, Sep 09, 2016 at 01:50:16PM +0800, Tan, Jianfeng wrote:
> On 9/9/2016 12:19 PM, Yuanhan Liu wrote:
> 
> >>
> >>
> >>         Never mind, above fix on the vhost side will not take effect on existing
> >>         vpp-vhost implementations.
> >>
> >>     Actually, I was talking about the DPDK vhost implementation :)
> >>
> >>
> >>This patch is talking about vpp's native vhost implementation, not dpdk-vhost,
> >>and not the way vpp uses dpdk-vhost.
> >Yes, I know. What I meant is there was a "workaround" in DPDK vhost
> >implementation, and since you bring this issue on the table again,
> >it's a chance to think about how can we fix it.
> >
> >A rough idea come to my mind is we could check all the per-vring message
> >at the beginning of vhost_user_msg_handler() and allocate related vq when
> >necessary (when it's the first vring message we got).
> >
> >Yeah, I know it's a bit ugly, but it at least gets rid of that "not-that-true"
> >assumption.
> 
> Sounds workable. So we'd define those vq-specific msgs, like:
> VHOST_USER_SET_VRING_NUM,
> VHOST_USER_SET_VRING_ADDR,
> VHOST_USER_SET_VRING_BASE,
> VHOST_USER_GET_VRING_BASE(?),
> VHOST_USER_SET_VRING_KICK,
> VHOST_USER_SET_VRING_CALL,
> VHOST_USER_SET_VRING_ENABLE,

Yes.

> >>         Still not working. VPP needs SET_VRING_CALL to create vq firstly.
> >>
> >>     Didn't get it. In the proposal, SET_FEATURES is sent before every other
> >>     messages, thus it should not cause the issue you described in this patch.
> >>
> >>
> >>OK. Let me try to explain. We take three vhost implementations into
> >>consideration: dpdk-2.2-vhost, dpdk-master-vhost, vpp-native-vhost.
> >>
> >>If set_feature before set_vring_call, dpdk-2.2-vhost will fail: inside
> >>set_feature handler, assigning header length to VQs which will be created in
> >>set_vring_call handler.
> >Oh, right. That was an incorrect implementation.
> >
> >>So we need to keep set_vring_call firstly.
> >>Then set_feature needs to be sent
> >>before any other msgs, this is what vpp-native-vhost requires. In all, the
> >>sequence is like this:
> >>1. set_vring_call,
> >>2. set_feature,
> >>3. other msgs
> >>
> >>
> >>     Besides, haven't we already sent SET_VRING_CALL before other messages
> >>     (well, except the SET_FEATURES and SET_MEM_TABLE message)?
> >>
> >>
> >>Yes, set_vring_call is already in the first place, but we need to plugin
> >>set_feature between set_vring_call and other msgs. Previously, set_vring_call
> >>and other msgs are together.
> >Okay. Another thing I noticed is that virtio-user lacks some feature
> >negotiations, like GET_FEATURES and GET_PROTOCOL_FEATURES. I think you
> >might need add them back somewhen?
> 
> GET_FEATURES has been done in virtio_user_dev_init().

Oh, sorry, I missed that.

> GET_PROTOCOL_FEATURES
> is not supported yet. I see those features in PROTOCOL_FEATURES is for live
> migration (right?).

Not exactly. PROTOCOL_FEATURES was first introduced when MQ was
enabled. Thus it's no wonder MQ is the first protocol feature vhost-user
supports:

    [yliu@yliu-dev ~/dpdk]$ gg PROTOCOL_F_ lib/librte_vhost/
    lib/librte_vhost/vhost_user.h:46:#define VHOST_USER_PROTOCOL_F_MQ 0
    lib/librte_vhost/vhost_user.h:47:#define VHOST_USER_PROTOCOL_F_LOG_SHMFD 1
    lib/librte_vhost/vhost_user.h:48:#define VHOST_USER_PROTOCOL_F_RARP 2

	--yliu

> Assuming that, anyone using container/process now
> enables live migration so far? I don't think so.
> 
> Thanks,
> Jianfeng
> 
> 
> >
> >	--yliu

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [dpdk-dev] [PATCH 2/3] net/virtio_user: fix wrong sequence of messages
  2016-09-09  6:03                   ` Yuanhan Liu
@ 2016-09-09  6:24                     ` Tan, Jianfeng
  2016-09-09  6:31                       ` Yuanhan Liu
  0 siblings, 1 reply; 26+ messages in thread
From: Tan, Jianfeng @ 2016-09-09  6:24 UTC (permalink / raw)
  To: Yuanhan Liu; +Cc: dev, zhihong.wang, lining18



On 9/9/2016 2:03 PM, Yuanhan Liu wrote:
>> GET_PROTOCOL_FEATURES
>> is not supported yet. I see those features in PROTOCOL_FEATURES is for live
>> migration (right?).
> Not exactly. PROTOCOL_FEATURES was firstly introduced while MQ was
> enabled. Thus it's no wonder MQ is the first protocol feature vhost
> user supports:
>
>      [yliu@yliu-dev ~/dpdk]$ gg PROTOCOL_F_ lib/librte_vhost/
>      lib/librte_vhost/vhost_user.h:46:#define VHOST_USER_PROTOCOL_F_MQ 0
>      lib/librte_vhost/vhost_user.h:47:#define VHOST_USER_PROTOCOL_F_LOG_SHMFD 1
>      lib/librte_vhost/vhost_user.h:48:#define VHOST_USER_PROTOCOL_F_RARP 2
>
> 	--yliu

OK, I got it. The maximum queue pair number is now a parameter of 
virtio_user, but we need to rely on PROTOCOL_FEATURES (and further, 
VHOST_USER_GET_QUEUE_NUM) to learn the maximum queue pair number that 
vhost can support.
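
If virtio_user later adds PROTOCOL_FEATURES support, the check could look
roughly like this (a sketch only; it assumes vhost_user_sock() can carry
the GET_PROTOCOL_FEATURES and GET_QUEUE_NUM requests, which it does not
do today):

    uint64_t protocol_features = 0, max_qp = 1;

    vhost_user_sock(dev->vhostfd, VHOST_USER_GET_PROTOCOL_FEATURES,
                    &protocol_features);
    if (protocol_features & (1ULL << VHOST_USER_PROTOCOL_F_MQ))
        vhost_user_sock(dev->vhostfd, VHOST_USER_GET_QUEUE_NUM, &max_qp);

    if (dev->max_queue_pairs > max_qp)
        return -1;  /* user asked for more queue pairs than vhost supports */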

Just wondering: why doesn't QEMU depend on (1ULL << VIRTIO_NET_F_MQ) inside 
features to do that?

Thanks,
Jianfeng

>
>> Assuming that, anyone using container/process now
>> enables live migration so far? I don't think so.
>>
>> Thanks,
>> Jianfeng
>>
>>
>>> 	--yliu

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [dpdk-dev] [PATCH 2/3] net/virtio_user: fix wrong sequence of messages
  2016-09-09  6:24                     ` Tan, Jianfeng
@ 2016-09-09  6:31                       ` Yuanhan Liu
  0 siblings, 0 replies; 26+ messages in thread
From: Yuanhan Liu @ 2016-09-09  6:31 UTC (permalink / raw)
  To: Tan, Jianfeng; +Cc: dev, zhihong.wang, lining18

On Fri, Sep 09, 2016 at 02:24:20PM +0800, Tan, Jianfeng wrote:
> 
> 
> On 9/9/2016 2:03 PM, Yuanhan Liu wrote:
> >>GET_PROTOCOL_FEATURES
> >>is not supported yet. I see those features in PROTOCOL_FEATURES is for live
> >>migration (right?).
> >Not exactly. PROTOCOL_FEATURES was firstly introduced while MQ was
> >enabled. Thus it's no wonder MQ is the first protocol feature vhost
> >user supports:
> >
> >     [yliu@yliu-dev ~/dpdk]$ gg PROTOCOL_F_ lib/librte_vhost/
> >     lib/librte_vhost/vhost_user.h:46:#define VHOST_USER_PROTOCOL_F_MQ 0
> >     lib/librte_vhost/vhost_user.h:47:#define VHOST_USER_PROTOCOL_F_LOG_SHMFD 1
> >     lib/librte_vhost/vhost_user.h:48:#define VHOST_USER_PROTOCOL_F_RARP 2
> >
> >	--yliu
> 
> OK, I got it. The maximum of queue pair number is now a parameter of
> virtio_user, but we need to depend on PROTOCOL_FEATURES (further,
> VHOST_USER_GET_QUEUE_NUM) to maximum queue pair number that vhost can
> support.
> 
> Just wonder why not QEMU depends on (1ULL << VIRTIO_NET_F_MQ) inside
> features to do that?

VIRTIO_NET_F_MQ belongs to the virtio spec, while VHOST_USER_PROTOCOL_F_MQ
belongs to the vhost-user spec.

	--yliu

^ permalink raw reply	[flat|nested] 26+ messages in thread

* [dpdk-dev] [PATCH v2 0/3] fix virtio_user issues
  2016-08-05 11:36 [dpdk-dev] [PATCH 0/3] fix virtio_user issues Jianfeng Tan
                   ` (3 preceding siblings ...)
  2016-08-29  7:01 ` [dpdk-dev] [PATCH 0/3] fix virtio_user issues Christian Ehrhardt
@ 2016-09-27 19:11 ` Jianfeng Tan
  2016-09-27 19:11   ` [dpdk-dev] [PATCH v2 1/3] net/virtio_user: fix queue pair not enabled Jianfeng Tan
                     ` (2 more replies)
  4 siblings, 3 replies; 26+ messages in thread
From: Jianfeng Tan @ 2016-09-27 19:11 UTC (permalink / raw)
  To: dev; +Cc: stable, yuanhan.liu, Jianfeng Tan

v2:
  -- Patch 1: Enable the 1st queue pair unconditionally, instead of depending
  on VHOST_USER_GET_PROTOCOL_FEATURES.
  -- Patch 3: address Stephen's comment on the confusion caused by the two
  local variables named ret and result; now we keep just one.

Patch 1: fix issue when using virtio_user with OVS-DPDK.
Patch 2: fix issue when using virtio_user with VPP.
Patch 3: fix issue when failing to start virtio_user devices.

Signed-off-by: Jianfeng Tan <jianfeng.tan@intel.com>

Jianfeng Tan (3):
  net/virtio_user: fix queue pair not enabled
  net/virtio_user: fix wrong sequence of messages
  net/virtio_user: fix dev not freed after init error

 drivers/net/virtio/virtio_user/virtio_user_dev.c | 112 ++++++++++++++---------
 drivers/net/virtio/virtio_user_ethdev.c          |  42 +++++----
 2 files changed, 96 insertions(+), 58 deletions(-)

-- 
2.7.4

^ permalink raw reply	[flat|nested] 26+ messages in thread

* [dpdk-dev] [PATCH v2 1/3] net/virtio_user: fix queue pair not enabled
  2016-09-27 19:11 ` [dpdk-dev] [PATCH v2 " Jianfeng Tan
@ 2016-09-27 19:11   ` Jianfeng Tan
  2016-09-28  0:05     ` Yuanhan Liu
  2016-09-27 19:11   ` [dpdk-dev] [PATCH v2 2/3] net/virtio_user: fix wrong sequence of messages Jianfeng Tan
  2016-09-27 19:11   ` [dpdk-dev] [PATCH v2 3/3] net/virtio_user: fix dev not freed after init error Jianfeng Tan
  2 siblings, 1 reply; 26+ messages in thread
From: Jianfeng Tan @ 2016-09-27 19:11 UTC (permalink / raw)
  To: dev; +Cc: stable, yuanhan.liu, Jianfeng Tan

When virtio_user is used with OVS-DPDK (with mq disabled), it cannot
receive any packets.

It's because when vhost provides VHOST_USER_GET_PROTOCOL_FEATURES,
all queue pairs are initialized in the disabled state. Quote
QEMU/docs/specs/vhost-user.txt:
    If VHOST_USER_F_PROTOCOL_FEATURES has not been negotiated, the
    ring is initialized in an enabled state.
    If VHOST_USER_F_PROTOCOL_FEATURES has been negotiated, the ring
    is initialized in a disabled state.

In OVS-DPDK, all queue pairs are in the disabled state by default.
When used with QEMU, QEMU will set it as enabled in the process of
initialization. So this patch is to add similar logic in virtio_user.

Fixes: 37a7eb2ae816 ("net/virtio-user: add device emulation layer")

Reported-by: Ning Li <lining18@jd.com>
Signed-off-by: Jianfeng Tan <jianfeng.tan@intel.com>
---
 drivers/net/virtio/virtio_user/virtio_user_dev.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/drivers/net/virtio/virtio_user/virtio_user_dev.c b/drivers/net/virtio/virtio_user/virtio_user_dev.c
index 376c9cf..bf1155c 100644
--- a/drivers/net/virtio/virtio_user/virtio_user_dev.c
+++ b/drivers/net/virtio/virtio_user/virtio_user_dev.c
@@ -131,6 +131,9 @@ virtio_user_start_device(struct virtio_user_dev *dev)
 		}
 	}
 
+	/* we enable the 1st queue pair by default. */
+	vhost_user_enable_queue_pair(dev->vhostfd, 0, 1);
+
 	/* After setup all virtqueues, we need to set_features so that these
 	 * features can be set into each virtqueue in vhost side. And before
 	 * that, make sure VHOST_USER_F_PROTOCOL_FEATURES is added if mq is
-- 
2.7.4

^ permalink raw reply	[flat|nested] 26+ messages in thread

* [dpdk-dev] [PATCH v2 2/3] net/virtio_user: fix wrong sequence of messages
  2016-09-27 19:11 ` [dpdk-dev] [PATCH v2 " Jianfeng Tan
  2016-09-27 19:11   ` [dpdk-dev] [PATCH v2 1/3] net/virtio_user: fix queue pair not enabled Jianfeng Tan
@ 2016-09-27 19:11   ` Jianfeng Tan
  2016-09-27 19:11   ` [dpdk-dev] [PATCH v2 3/3] net/virtio_user: fix dev not freed after init error Jianfeng Tan
  2 siblings, 0 replies; 26+ messages in thread
From: Jianfeng Tan @ 2016-09-27 19:11 UTC (permalink / raw)
  To: dev; +Cc: stable, yuanhan.liu, Jianfeng Tan

When virtio_user is used with VPP's native vhost user, it cannot
send/receive any packets.

The root cause is that vpp-vhost-user interprets the message
VHOST_USER_SET_FEATURES as putting this device into its init state,
i.e., zeroing all related structures. However, the previous code
sent this message last in the whole initialization process, which
led to all previously set information being zeroed.

To fix this issue, we rearrange the sequence of those messages.
  - step 0, send VHOST_USER_SET_VRING_CALL so that vhost allocates
    virtqueue structures;
  - step 1, send VHOST_USER_SET_FEATURES to confirm the features;
  - step 2, send VHOST_USER_SET_MEM_TABLE to share mem regions;
  - step 3, send VHOST_USER_SET_VRING_NUM, VHOST_USER_SET_VRING_BASE,
    VHOST_USER_SET_VRING_ADDR, VHOST_USER_SET_VRING_KICK for each
    queue;
  - ...

Fixes: 37a7eb2ae816 ("net/virtio-user: add device emulation layer")

Reported-by: Zhihong Wang <zhihong.wang@intel.com>
Signed-off-by: Jianfeng Tan <jianfeng.tan@intel.com>
---
 drivers/net/virtio/virtio_user/virtio_user_dev.c | 113 ++++++++++++++---------
 1 file changed, 69 insertions(+), 44 deletions(-)

diff --git a/drivers/net/virtio/virtio_user/virtio_user_dev.c b/drivers/net/virtio/virtio_user/virtio_user_dev.c
index bf1155c..e239e0e 100644
--- a/drivers/net/virtio/virtio_user/virtio_user_dev.c
+++ b/drivers/net/virtio/virtio_user/virtio_user_dev.c
@@ -45,20 +45,14 @@
 #include "../virtio_ethdev.h"
 
 static int
-virtio_user_kick_queue(struct virtio_user_dev *dev, uint32_t queue_sel)
+virtio_user_create_queue(struct virtio_user_dev *dev, uint32_t queue_sel)
 {
-	int callfd, kickfd;
+	/* Of all per virtqueue MSGs, make sure VHOST_SET_VRING_CALL come
+	 * firstly because vhost depends on this msg to allocate virtqueue
+	 * pair.
+	 */
+	int callfd;
 	struct vhost_vring_file file;
-	struct vhost_vring_state state;
-	struct vring *vring = &dev->vrings[queue_sel];
-	struct vhost_vring_addr addr = {
-		.index = queue_sel,
-		.desc_user_addr = (uint64_t)(uintptr_t)vring->desc,
-		.avail_user_addr = (uint64_t)(uintptr_t)vring->avail,
-		.used_user_addr = (uint64_t)(uintptr_t)vring->used,
-		.log_guest_addr = 0,
-		.flags = 0, /* disable log */
-	};
 
 	/* May use invalid flag, but some backend leverages kickfd and callfd as
 	 * criteria to judge if dev is alive. so finally we use real event_fd.
@@ -68,22 +62,30 @@ virtio_user_kick_queue(struct virtio_user_dev *dev, uint32_t queue_sel)
 		PMD_DRV_LOG(ERR, "callfd error, %s\n", strerror(errno));
 		return -1;
 	}
-	kickfd = eventfd(0, EFD_CLOEXEC | EFD_NONBLOCK);
-	if (kickfd < 0) {
-		close(callfd);
-		PMD_DRV_LOG(ERR, "kickfd error, %s\n", strerror(errno));
-		return -1;
-	}
-
-	/* Of all per virtqueue MSGs, make sure VHOST_SET_VRING_CALL come
-	 * firstly because vhost depends on this msg to allocate virtqueue
-	 * pair.
-	 */
 	file.index = queue_sel;
 	file.fd = callfd;
 	vhost_user_sock(dev->vhostfd, VHOST_USER_SET_VRING_CALL, &file);
 	dev->callfds[queue_sel] = callfd;
 
+	return 0;
+}
+
+static int
+virtio_user_kick_queue(struct virtio_user_dev *dev, uint32_t queue_sel)
+{
+	int kickfd;
+	struct vhost_vring_file file;
+	struct vhost_vring_state state;
+	struct vring *vring = &dev->vrings[queue_sel];
+	struct vhost_vring_addr addr = {
+		.index = queue_sel,
+		.desc_user_addr = (uint64_t)(uintptr_t)vring->desc,
+		.avail_user_addr = (uint64_t)(uintptr_t)vring->avail,
+		.used_user_addr = (uint64_t)(uintptr_t)vring->used,
+		.log_guest_addr = 0,
+		.flags = 0, /* disable log */
+	};
+
 	state.index = queue_sel;
 	state.num = vring->num;
 	vhost_user_sock(dev->vhostfd, VHOST_USER_SET_VRING_NUM, &state);
@@ -97,6 +99,12 @@ virtio_user_kick_queue(struct virtio_user_dev *dev, uint32_t queue_sel)
 	 * lastly because vhost depends on this msg to judge if
 	 * virtio is ready.
 	 */
+	kickfd = eventfd(0, EFD_CLOEXEC | EFD_NONBLOCK);
+	if (kickfd < 0) {
+		PMD_DRV_LOG(ERR, "kickfd error, %s\n", strerror(errno));
+		return -1;
+	}
+	file.index = queue_sel;
 	file.fd = kickfd;
 	vhost_user_sock(dev->vhostfd, VHOST_USER_SET_VRING_KICK, &file);
 	dev->kickfds[queue_sel] = kickfd;
@@ -104,40 +112,43 @@ virtio_user_kick_queue(struct virtio_user_dev *dev, uint32_t queue_sel)
 	return 0;
 }
 
-int
-virtio_user_start_device(struct virtio_user_dev *dev)
+static int
+virtio_user_queue_setup(struct virtio_user_dev *dev,
+			int (*fn)(struct virtio_user_dev *, uint32_t))
 {
-	uint64_t features;
 	uint32_t i, queue_sel;
-	int ret;
-
-	/* construct memory region inside each implementation */
-	ret = vhost_user_sock(dev->vhostfd, VHOST_USER_SET_MEM_TABLE, NULL);
-	if (ret < 0)
-		goto error;
 
 	for (i = 0; i < dev->max_queue_pairs; ++i) {
 		queue_sel = 2 * i + VTNET_SQ_RQ_QUEUE_IDX;
-		if (virtio_user_kick_queue(dev, queue_sel) < 0) {
-			PMD_DRV_LOG(INFO, "kick rx vq fails: %u", i);
-			goto error;
+		if (fn(dev, queue_sel) < 0) {
+			PMD_DRV_LOG(INFO, "setup rx vq fails: %u", i);
+			return -1;
 		}
 	}
 	for (i = 0; i < dev->max_queue_pairs; ++i) {
 		queue_sel = 2 * i + VTNET_SQ_TQ_QUEUE_IDX;
-		if (virtio_user_kick_queue(dev, queue_sel) < 0) {
-			PMD_DRV_LOG(INFO, "kick tx vq fails: %u", i);
-			goto error;
+		if (fn(dev, queue_sel) < 0) {
+			PMD_DRV_LOG(INFO, "setup tx vq fails: %u", i);
+			return -1;
 		}
 	}
 
-	/* we enable the 1st queue pair by default. */
-	vhost_user_enable_queue_pair(dev->vhostfd, 0, 1);
+	return 0;
+}
+
+int
+virtio_user_start_device(struct virtio_user_dev *dev)
+{
+	uint64_t features;
+	int ret;
+
+	/* Step 0: tell vhost to create queues */
+	if (virtio_user_queue_setup(dev, virtio_user_create_queue) < 0)
+		goto error;
 
-	/* After setup all virtqueues, we need to set_features so that these
-	 * features can be set into each virtqueue in vhost side. And before
-	 * that, make sure VHOST_USER_F_PROTOCOL_FEATURES is added if mq is
-	 * enabled, and VIRTIO_NET_F_MAC is stripped.
+	/* Step 1: set features
+	 * Make sure VHOST_USER_F_PROTOCOL_FEATURES is added if mq is enabled,
+	 * and VIRTIO_NET_F_MAC is stripped.
 	 */
 	features = dev->features;
 	if (dev->max_queue_pairs > 1)
@@ -148,6 +159,20 @@ virtio_user_start_device(struct virtio_user_dev *dev)
 		goto error;
 	PMD_DRV_LOG(INFO, "set features: %" PRIx64, features);
 
+	/* Step 2: share memory regions */
+	ret = vhost_user_sock(dev->vhostfd, VHOST_USER_SET_MEM_TABLE, NULL);
+	if (ret < 0)
+		goto error;
+
+	/* Step 3: kick queues */
+	if (virtio_user_queue_setup(dev, virtio_user_kick_queue) < 0)
+		goto error;
+
+	/* Step 4: enable queues
+	 * we enable the 1st queue pair by default.
+	 */
+	vhost_user_enable_queue_pair(dev->vhostfd, 0, 1);
+
 	return 0;
 error:
 	/* TODO: free resource here or caller to check */
-- 
2.7.4

^ permalink raw reply	[flat|nested] 26+ messages in thread

* [dpdk-dev] [PATCH v2 3/3] net/virtio_user: fix dev not freed after init error
  2016-09-27 19:11 ` [dpdk-dev] [PATCH v2 " Jianfeng Tan
  2016-09-27 19:11   ` [dpdk-dev] [PATCH v2 1/3] net/virtio_user: fix queue pair not enabled Jianfeng Tan
  2016-09-27 19:11   ` [dpdk-dev] [PATCH v2 2/3] net/virtio_user: fix wrong sequence of messages Jianfeng Tan
@ 2016-09-27 19:11   ` Jianfeng Tan
  2 siblings, 0 replies; 26+ messages in thread
From: Jianfeng Tan @ 2016-09-27 19:11 UTC (permalink / raw)
  To: dev; +Cc: stable, yuanhan.liu, Jianfeng Tan

Currently, when a virtio_user device fails to be started (e.g., the vhost
unix socket does not exist), the init function does not return the struct
rte_eth_dev (and some other structs) back to the ether layer. What's
more, it does not report the error to the upper layer.

The fix is to free those structs and report the error when failing to
start virtio_user devices.

Fixes: ce2eabdd43ec ("net/virtio-user: add virtual device")

Signed-off-by: Jianfeng Tan <jianfeng.tan@intel.com>
---
 drivers/net/virtio/virtio_user_ethdev.c | 42 ++++++++++++++++++++-------------
 1 file changed, 26 insertions(+), 16 deletions(-)

diff --git a/drivers/net/virtio/virtio_user_ethdev.c b/drivers/net/virtio/virtio_user_ethdev.c
index daef09b..bba7402 100644
--- a/drivers/net/virtio/virtio_user_ethdev.c
+++ b/drivers/net/virtio/virtio_user_ethdev.c
@@ -313,6 +313,17 @@ virtio_user_eth_dev_alloc(const char *name)
 	return eth_dev;
 }
 
+static void
+virtio_user_eth_dev_free(struct rte_eth_dev *eth_dev)
+{
+	struct rte_eth_dev_data *data = eth_dev->data;
+	struct virtio_hw *hw = data->dev_private;
+
+	rte_free(hw->virtio_user_dev);
+	rte_free(hw);
+	rte_eth_dev_release_port(eth_dev);
+}
+
 /* Dev initialization routine. Invoked once for each virtio vdev at
  * EAL init time, see rte_eal_dev_init().
  * Returns 0 on success.
@@ -343,9 +354,8 @@ virtio_user_pmd_devinit(const char *name, const char *params)
 	}
 
 	if (rte_kvargs_count(kvlist, VIRTIO_USER_ARG_PATH) == 1) {
-		ret = rte_kvargs_process(kvlist, VIRTIO_USER_ARG_PATH,
-					 &get_string_arg, &path);
-		if (ret < 0) {
+		if (rte_kvargs_process(kvlist, VIRTIO_USER_ARG_PATH,
+				       &get_string_arg, &path) < 0) {
 			PMD_INIT_LOG(ERR, "error to parse %s",
 				     VIRTIO_USER_ARG_PATH);
 			goto end;
@@ -357,9 +367,8 @@ virtio_user_pmd_devinit(const char *name, const char *params)
 	}
 
 	if (rte_kvargs_count(kvlist, VIRTIO_USER_ARG_MAC) == 1) {
-		ret = rte_kvargs_process(kvlist, VIRTIO_USER_ARG_MAC,
-					 &get_string_arg, &mac_addr);
-		if (ret < 0) {
+		if (rte_kvargs_process(kvlist, VIRTIO_USER_ARG_MAC,
+				       &get_string_arg, &mac_addr) < 0) {
 			PMD_INIT_LOG(ERR, "error to parse %s",
 				     VIRTIO_USER_ARG_MAC);
 			goto end;
@@ -367,9 +376,8 @@ virtio_user_pmd_devinit(const char *name, const char *params)
 	}
 
 	if (rte_kvargs_count(kvlist, VIRTIO_USER_ARG_QUEUE_SIZE) == 1) {
-		ret = rte_kvargs_process(kvlist, VIRTIO_USER_ARG_QUEUE_SIZE,
-					 &get_integer_arg, &queue_size);
-		if (ret < 0) {
+		if (rte_kvargs_process(kvlist, VIRTIO_USER_ARG_QUEUE_SIZE,
+				       &get_integer_arg, &queue_size) < 0) {
 			PMD_INIT_LOG(ERR, "error to parse %s",
 				     VIRTIO_USER_ARG_QUEUE_SIZE);
 			goto end;
@@ -377,9 +385,8 @@ virtio_user_pmd_devinit(const char *name, const char *params)
 	}
 
 	if (rte_kvargs_count(kvlist, VIRTIO_USER_ARG_QUEUES_NUM) == 1) {
-		ret = rte_kvargs_process(kvlist, VIRTIO_USER_ARG_QUEUES_NUM,
-					 &get_integer_arg, &queues);
-		if (ret < 0) {
+		if (rte_kvargs_process(kvlist, VIRTIO_USER_ARG_QUEUES_NUM,
+				       &get_integer_arg, &queues) < 0) {
 			PMD_INIT_LOG(ERR, "error to parse %s",
 				     VIRTIO_USER_ARG_QUEUES_NUM);
 			goto end;
@@ -387,9 +394,8 @@ virtio_user_pmd_devinit(const char *name, const char *params)
 	}
 
 	if (rte_kvargs_count(kvlist, VIRTIO_USER_ARG_CQ_NUM) == 1) {
-		ret = rte_kvargs_process(kvlist, VIRTIO_USER_ARG_CQ_NUM,
-					 &get_integer_arg, &cq);
-		if (ret < 0) {
+		if (rte_kvargs_process(kvlist, VIRTIO_USER_ARG_CQ_NUM,
+				       &get_integer_arg, &cq) < 0) {
 			PMD_INIT_LOG(ERR, "error to parse %s",
 				     VIRTIO_USER_ARG_CQ_NUM);
 			goto end;
@@ -411,12 +417,16 @@ virtio_user_pmd_devinit(const char *name, const char *params)
 
 	hw = eth_dev->data->dev_private;
 	if (virtio_user_dev_init(hw->virtio_user_dev, path, queues, cq,
-				 queue_size, mac_addr) < 0)
+				 queue_size, mac_addr) < 0) {
+		PMD_INIT_LOG(ERR, "virtio_user_dev_init fails");
+		virtio_user_eth_dev_free(eth_dev);
 		goto end;
+	}
 
 	/* previously called by rte_eal_pci_probe() for physical dev */
 	if (eth_virtio_dev_init(eth_dev) < 0) {
 		PMD_INIT_LOG(ERR, "eth_virtio_dev_init fails");
+		virtio_user_eth_dev_free(eth_dev);
 		goto end;
 	}
 	ret = 0;
-- 
2.7.4

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [dpdk-dev] [PATCH v2 1/3] net/virtio_user: fix queue pair not enabled
  2016-09-27 19:11   ` [dpdk-dev] [PATCH v2 1/3] net/virtio_user: fix queue pair not enabled Jianfeng Tan
@ 2016-09-28  0:05     ` Yuanhan Liu
  0 siblings, 0 replies; 26+ messages in thread
From: Yuanhan Liu @ 2016-09-28  0:05 UTC (permalink / raw)
  To: Jianfeng Tan; +Cc: dev, stable

On Tue, Sep 27, 2016 at 07:11:04PM +0000, Jianfeng Tan wrote:
> When virtio_user is used with OVS-DPDK (with mq disabled), it cannot
> receive any packets.
> 
> It's because when vhost provides VHOST_USER_GET_PROTOCOL_FEATURES,
> all queue pairs are initialized in the disabled state. Quote
> QEMU/docs/specs/vhost-user.txt:
>     If VHOST_USER_F_PROTOCOL_FEATURES has not been negotiated, the
>     ring is initialized in an enabled state.
>     If VHOST_USER_F_PROTOCOL_FEATURES has been negotiated, the ring
>     is initialized in a disabled state.

As stated before, it has nothing to do with VHOST_USER_F_PROTOCOL_FEATURES.
You seem to have forgotten to change the commit log.
>
> In OVS-DPDK, all queue pairs are in the disabled state by default.
> When used with QEMU, QEMU will set it as enabled in the process of
> initialization. So this patch is to add similar logic in virtio_user.

I would reword the commit log to something like the following:

    When virtio_user is used with OVS-DPDK (with mq disabled), it cannot
    receive any packets. This is because no queue is enabled at all when
    mq is disabled.
    
    To fix it, we should consistently make sure the 1st queue is
    enabled, which is also the behaviour QEMU takes.
 
> Fixes: 37a7eb2ae816 ("net/virtio-user: add device emulation layer")
> 
> Reported-by: Ning Li <lining18@jd.com>
> Signed-off-by: Jianfeng Tan <jianfeng.tan@intel.com>

Series applied to dpdk-next-virtio, with above commit reword.

Thanks.

	--yliu

^ permalink raw reply	[flat|nested] 26+ messages in thread

end of thread, other threads:[~2016-09-28  0:04 UTC | newest]

Thread overview: 26+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2016-08-05 11:36 [dpdk-dev] [PATCH 0/3] fix virtio_user issues Jianfeng Tan
2016-08-05 11:36 ` [dpdk-dev] [PATCH 1/3] net/virtio_user: fix queue pair not enabled Jianfeng Tan
2016-09-06  6:30   ` Yuanhan Liu
2016-08-05 11:36 ` [dpdk-dev] [PATCH 2/3] net/virtio_user: fix wrong sequence of messages Jianfeng Tan
2016-08-05 16:36   ` Stephen Hemminger
2016-08-08  1:19     ` Tan, Jianfeng
2016-09-06  6:42   ` Yuanhan Liu
2016-09-06  7:54     ` Tan, Jianfeng
2016-09-06  8:20       ` Yuanhan Liu
2016-09-08  8:53         ` Tan, Jianfeng
2016-09-08 12:18           ` Yuanhan Liu
2016-09-09  3:59             ` Tan, Jianfeng
2016-09-09  4:19               ` Yuanhan Liu
2016-09-09  5:50                 ` Tan, Jianfeng
2016-09-09  6:03                   ` Yuanhan Liu
2016-09-09  6:24                     ` Tan, Jianfeng
2016-09-09  6:31                       ` Yuanhan Liu
2016-08-05 11:36 ` [dpdk-dev] [PATCH 3/3] net/virtio_user: fix dev not freed after init error Jianfeng Tan
2016-08-05 16:34   ` Stephen Hemminger
2016-08-08  1:07     ` Tan, Jianfeng
2016-08-29  7:01 ` [dpdk-dev] [PATCH 0/3] fix virtio_user issues Christian Ehrhardt
2016-09-27 19:11 ` [dpdk-dev] [PATCH v2 " Jianfeng Tan
2016-09-27 19:11   ` [dpdk-dev] [PATCH v2 1/3] net/virtio_user: fix queue pair not enabled Jianfeng Tan
2016-09-28  0:05     ` Yuanhan Liu
2016-09-27 19:11   ` [dpdk-dev] [PATCH v2 2/3] net/virtio_user: fix wrong sequence of messages Jianfeng Tan
2016-09-27 19:11   ` [dpdk-dev] [PATCH v2 3/3] net/virtio_user: fix dev not freed after init error Jianfeng Tan
