DPDK patches and discussions
From: Peng He <xnhp0320@gmail.com>
To: dev@dpdk.org, chenbo.xia@intel.com
Cc: stable@dpdk.org
Subject: [dpdk-dev] [PATCH v2] vhost: fix vid allocation race
Date: Mon,  1 Feb 2021 16:48:44 +0800	[thread overview]
Message-ID: <20210201084844.2434-1-hepeng.0320@bytedance.com> (raw)
In-Reply-To: <MN2PR11MB406353097EA1E6C1D051D24B9CB69@MN2PR11MB4063.namprd11.prod.outlook.com>

vhost_new_device() might be called concurrently from different threads:

thread 1 (config thread):
    rte_vhost_driver_start
      -> vhost_user_start_client
        -> vhost_user_add_connection
          -> vhost_new_device

thread 2 (vhost-events thread):
    vhost_user_read_cb
      -> vhost_user_msg_handler (return value < 0)
        -> vhost_user_start_client
          -> vhost_new_device

So the same vid could be allocated twice (both threads can find the
same free slot before either claims it), or a vid could be lost inside
the DPDK library while still being held by the upper application (the
second thread's device pointer overwrites the first thread's in the
shared slot).

Another place where a race could happen is vhost_destroy_device(),
but after a detailed investigation, the race does not exist there as
long as no two devices share the same vid: calling
vhost_destroy_device() from different threads with different vids is
actually safe.
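
A short sketch of why that holds (again simplified, reusing devices[]
from the sketch above; destroy_device() is a stand-in, not the real
vhost_destroy_device()):

/* With unique vids, each caller reads and clears only devices[vid],
 * its own slot, so two threads destroying different vids never touch
 * the same memory location and cannot race with each other.
 */
static void
destroy_device(int vid)
{
	void *dev = devices[vid];

	if (dev == NULL)
		return;
	devices[vid] = NULL;
	free(dev);
}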

Fixes: a277c715987 ("vhost: refactor code structure")
Reported-by: Peng He <hepeng.0320@bytedance.com>
Signed-off-by: Fei Chen <chenwei.0515@bytedance.com>
Reviewed-by: Zhihong Wang <wangzhihong.wzh@bytedance.com>
---
 lib/librte_vhost/vhost.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/lib/librte_vhost/vhost.c b/lib/librte_vhost/vhost.c
index efb136edd1..52ab93d1ec 100644
--- a/lib/librte_vhost/vhost.c
+++ b/lib/librte_vhost/vhost.c
@@ -26,6 +26,7 @@
 #include "vhost_user.h"
 
 struct virtio_net *vhost_devices[MAX_VHOST_DEVICE];
+pthread_mutex_t vhost_dev_lock = PTHREAD_MUTEX_INITIALIZER;
 
 /* Called with iotlb_lock read-locked */
 uint64_t
@@ -645,6 +646,7 @@ vhost_new_device(void)
 	struct virtio_net *dev;
 	int i;
 
+	pthread_mutex_lock(&vhost_dev_lock);
 	for (i = 0; i < MAX_VHOST_DEVICE; i++) {
 		if (vhost_devices[i] == NULL)
 			break;
@@ -653,6 +655,7 @@ vhost_new_device(void)
 	if (i == MAX_VHOST_DEVICE) {
 		VHOST_LOG_CONFIG(ERR,
 			"Failed to find a free slot for new device.\n");
+		pthread_mutex_unlock(&vhost_dev_lock);
 		return -1;
 	}
 
@@ -660,10 +663,13 @@ vhost_new_device(void)
 	if (dev == NULL) {
 		VHOST_LOG_CONFIG(ERR,
 			"Failed to allocate memory for new dev.\n");
+		pthread_mutex_unlock(&vhost_dev_lock);
 		return -1;
 	}
 
 	vhost_devices[i] = dev;
+	pthread_mutex_unlock(&vhost_dev_lock);
+
 	dev->vid = i;
 	dev->flags = VIRTIO_DEV_BUILTIN_VIRTIO_NET;
 	dev->slave_req_fd = -1;
-- 
2.23.0


Thread overview: 6+ messages
2021-01-29  7:35 [dpdk-dev] [PATCH] lib/librte_vhost: " Peng He
2021-02-01  6:27 ` Xia, Chenbo
2021-02-01  8:48   ` Peng He [this message]
2021-02-03  2:44     ` [dpdk-dev] [PATCH v2] vhost: " Xia, Chenbo
2021-02-03 17:21     ` Maxime Coquelin
2021-02-01  8:53   ` [dpdk-dev] [PATCH] lib/librte_vhost: " 贺鹏
