From: Maxime Coquelin <maxime.coquelin@redhat.com>
To: Gongming Chen <chengongming1900@outlook.com>, chenbox@nvidia.com
Cc: dev@dpdk.org, Gongming Chen <chengm11@chinatelecom.cn>, stable@dpdk.org
Subject: Re: [PATCH] vhost: fix crash caused by accessing a freed vsocket
Date: Wed, 3 Apr 2024 11:39:58 +0200	[thread overview]
Message-ID: <efdffdf4-43c2-4bc9-aae0-56cc51e15505@redhat.com> (raw)
In-Reply-To: <TYAP286MB0649D9A112E62A15C4792253D83D2@TYAP286MB0649.JPNP286.PROD.OUTLOOK.COM>

Hi Gongming,

This is the 9th time the patch has been sent.
I'm not sure whether there are changes between the revisions or these
are just re-sends, but that's something to avoid.

If there are differences, you should use versioning to highlight them.
If unsure, please check the contribution guidelines first.

Regarding the patch itself, I don't know if this is avoidable, but I
would prefer we do not introduce yet another lock in there.

Thanks,
Maxime

On 4/3/24 08:31, Gongming Chen wrote:
> From: Gongming Chen <chengm11@chinatelecom.cn>
> 
> When a vhost-user message handling error occurs in the event dispatch
> thread, the vsocket's reconn entry is added to the reconnection
> thread's reconn_list.
> Since the reconnection, event dispatch and app configuration threads
> share no common locking, the app configuration thread can free the
> vsocket in rte_vhost_driver_unregister() while its reconn entry still
> sits in reconn_list.
> The reconnection thread then connects the vsocket again and adds its
> conn back to the dispatch thread.
> Finally, the event dispatch thread accesses the freed vsocket,
> resulting in a use-after-free error.
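> 
> A simplified interleaving of the race (for illustration only; the
> function names are those in the commit message and backtrace below):
> 
>     dispatch thread                  config / reconnect threads
>     ---------------                  --------------------------
>     vhost_user_read_cb() fails,
>     vsocket reconn added to
>     reconn_list
>                                      rte_vhost_driver_unregister()
>                                      frees vsocket
>                                      reconnect thread connects the
>                                      stale vsocket, re-adds conn
>                                      to the fdset
>     vhost_user_read_cb() fires on
>     the freed vsocket -> crash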
> 
> This patch adds a vhost-thread read-write lock that serializes the
> reconnection, event dispatch and app configuration threads. When the
> vhost driver unregisters, it takes the lock exclusively so the
> vsocket can be freed safely.
> 
> #0  0x0000000000000025 in ?? ()
> #1  0x0000000003ed7ca0 in vhost_user_read_cb at lib/vhost/socket.c:330
> #2  0x0000000003ed625f in fdset_event_dispatch at lib/vhost/fd_man.c:283
> 
> Fixes: e623e0c6d8a5 ("vhost: add vhost-user client mode")
> Cc: stable@dpdk.org
> 
> Signed-off-by: Gongming Chen <chengm11@chinatelecom.cn>
> ---
>   lib/vhost/fd_man.c       |  3 +++
>   lib/vhost/meson.build    |  1 +
>   lib/vhost/socket.c       | 30 ++++++++++++------------------
>   lib/vhost/vhost_thread.c | 37 +++++++++++++++++++++++++++++++++++++
>   lib/vhost/vhost_thread.h | 16 ++++++++++++++++
>   5 files changed, 69 insertions(+), 18 deletions(-)
>   create mode 100644 lib/vhost/vhost_thread.c
>   create mode 100644 lib/vhost/vhost_thread.h
> 
> diff --git a/lib/vhost/fd_man.c b/lib/vhost/fd_man.c
> index 481e6b900a..b0e0aa2640 100644
> --- a/lib/vhost/fd_man.c
> +++ b/lib/vhost/fd_man.c
> @@ -9,6 +9,7 @@
>   #include <rte_log.h>
>   
>   #include "fd_man.h"
> +#include "vhost_thread.h"
>   
>   RTE_LOG_REGISTER_SUFFIX(vhost_fdset_logtype, fdset, INFO);
>   #define RTE_LOGTYPE_VHOST_FDMAN vhost_fdset_logtype
> @@ -250,6 +251,7 @@ fdset_event_dispatch(void *arg)
>   		if (val < 0)
>   			continue;
>   
> +		vhost_thread_read_lock();
>   		need_shrink = 0;
>   		for (i = 0; i < numfds; i++) {
>   			pthread_mutex_lock(&pfdset->fd_mutex);
> @@ -303,6 +305,7 @@ fdset_event_dispatch(void *arg)
>   
>   		if (need_shrink)
>   			fdset_shrink(pfdset);
> +		vhost_thread_read_unlock();
>   	}
>   
>   	return 0;
> diff --git a/lib/vhost/meson.build b/lib/vhost/meson.build
> index 41b622a9be..7bc1840ed0 100644
> --- a/lib/vhost/meson.build
> +++ b/lib/vhost/meson.build
> @@ -25,6 +25,7 @@ sources = files(
>           'vdpa.c',
>           'vhost.c',
>           'vhost_crypto.c',
> +        'vhost_thread.c',
>           'vhost_user.c',
>           'virtio_net.c',
>           'virtio_net_ctrl.c',
> diff --git a/lib/vhost/socket.c b/lib/vhost/socket.c
> index 96b3ab5595..e05d36f549 100644
> --- a/lib/vhost/socket.c
> +++ b/lib/vhost/socket.c
> @@ -20,6 +20,7 @@
>   #include "fd_man.h"
>   #include "vduse.h"
>   #include "vhost.h"
> +#include "vhost_thread.h"
>   #include "vhost_user.h"
>   
>   
> @@ -463,6 +464,7 @@ vhost_user_client_reconnect(void *arg __rte_unused)
>   	struct vhost_user_reconnect *reconn, *next;
>   
>   	while (1) {
> +		vhost_thread_read_lock();
>   		pthread_mutex_lock(&reconn_list.mutex);
>   
>   		/*
> @@ -494,6 +496,7 @@ vhost_user_client_reconnect(void *arg __rte_unused)
>   		}
>   
>   		pthread_mutex_unlock(&reconn_list.mutex);
> +		vhost_thread_read_unlock();
>   		sleep(1);
>   	}
>   
> @@ -1071,7 +1074,7 @@ rte_vhost_driver_unregister(const char *path)
>   	if (path == NULL)
>   		return -1;
>   
> -again:
> +	vhost_thread_write_lock();
>   	pthread_mutex_lock(&vhost_user.mutex);
>   
>   	for (i = 0; i < vhost_user.vsocket_cnt; i++) {
> @@ -1083,14 +1086,10 @@ rte_vhost_driver_unregister(const char *path)
>   			vduse_device_destroy(path);
>   		} else if (vsocket->is_server) {
>   			/*
> -			 * If r/wcb is executing, release vhost_user's
> -			 * mutex lock, and try again since the r/wcb
> -			 * may use the mutex lock.
> +			 * The vhost thread write lock has been acquired,
> +			 * and fd must be deleted from fdset.
>   			 */
> -			if (fdset_try_del(&vhost_user.fdset, vsocket->socket_fd) == -1) {
> -				pthread_mutex_unlock(&vhost_user.mutex);
> -				goto again;
> -			}
> +			fdset_del(&vhost_user.fdset, vsocket->socket_fd);
>   		} else if (vsocket->reconnect) {
>   			vhost_user_remove_reconnect(vsocket);
>   		}
> @@ -1102,17 +1101,10 @@ rte_vhost_driver_unregister(const char *path)
>   			next = TAILQ_NEXT(conn, next);
>   
>   			/*
> -			 * If r/wcb is executing, release vsocket's
> -			 * conn_mutex and vhost_user's mutex locks, and
> -			 * try again since the r/wcb may use the
> -			 * conn_mutex and mutex locks.
> +			 * The vhost thread write lock has been acquired,
> +			 * and fd must be deleted from fdset.
>   			 */
> -			if (fdset_try_del(&vhost_user.fdset,
> -					  conn->connfd) == -1) {
> -				pthread_mutex_unlock(&vsocket->conn_mutex);
> -				pthread_mutex_unlock(&vhost_user.mutex);
> -				goto again;
> -			}
> +			fdset_del(&vhost_user.fdset, conn->connfd);
>   
>   			VHOST_CONFIG_LOG(path, INFO, "free connfd %d", conn->connfd);
>   			close(conn->connfd);
> @@ -1134,9 +1126,11 @@ rte_vhost_driver_unregister(const char *path)
>   		vhost_user.vsockets[i] = vhost_user.vsockets[count];
>   		vhost_user.vsockets[count] = NULL;
>   		pthread_mutex_unlock(&vhost_user.mutex);
> +		vhost_thread_write_unlock();
>   		return 0;
>   	}
>   	pthread_mutex_unlock(&vhost_user.mutex);
> +	vhost_thread_write_unlock();
>   
>   	return -1;
>   }
> diff --git a/lib/vhost/vhost_thread.c b/lib/vhost/vhost_thread.c
> new file mode 100644
> index 0000000000..6b5dc22042
> --- /dev/null
> +++ b/lib/vhost/vhost_thread.c
> @@ -0,0 +1,37 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright (c) 2024 China Telecom Cloud Technology Co., Ltd
> + */
> +
> +#include <rte_rwlock.h>
> +
> +#include "vhost_thread.h"
> +
> +static rte_rwlock_t vhost_thread_lock = RTE_RWLOCK_INITIALIZER;
> +
> +void
> +vhost_thread_read_lock(void)
> +	__rte_no_thread_safety_analysis
> +{
> +	rte_rwlock_read_lock(&vhost_thread_lock);
> +}
> +
> +void
> +vhost_thread_read_unlock(void)
> +	__rte_no_thread_safety_analysis
> +{
> +	rte_rwlock_read_unlock(&vhost_thread_lock);
> +}
> +
> +void
> +vhost_thread_write_lock(void)
> +	__rte_no_thread_safety_analysis
> +{
> +	rte_rwlock_write_lock(&vhost_thread_lock);
> +}
> +
> +void
> +vhost_thread_write_unlock(void)
> +	__rte_no_thread_safety_analysis
> +{
> +	rte_rwlock_write_unlock(&vhost_thread_lock);
> +}
> diff --git a/lib/vhost/vhost_thread.h b/lib/vhost/vhost_thread.h
> new file mode 100644
> index 0000000000..61679172af
> --- /dev/null
> +++ b/lib/vhost/vhost_thread.h
> @@ -0,0 +1,16 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright (c) 2024 China Telecom Cloud Technology Co., Ltd
> + */
> +
> +#ifndef _VHOST_THREAD_H_
> +#define _VHOST_THREAD_H_
> +
> +void vhost_thread_read_lock(void);
> +
> +void vhost_thread_read_unlock(void);
> +
> +void vhost_thread_write_lock(void);
> +
> +void vhost_thread_write_unlock(void);
> +
> +#endif
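> 
> For readers, a minimal standalone sketch of the locking scheme this
> patch introduces (the rte_rwlock calls are the real DPDK API; the
> worker/unregister functions and the use of plain free() are
> illustrative only, the actual code is in vhost_thread.c and socket.c
> above):
> 
>     #include <stdlib.h>
>     #include <rte_rwlock.h>
> 
>     static rte_rwlock_t vhost_thread_lock = RTE_RWLOCK_INITIALIZER;
> 
>     /* Event dispatch / reconnection threads: readers. They may run
>      * concurrently with each other, but never while the unregister
>      * path holds the write lock, so the vsocket they dereference
>      * stays alive for the whole iteration. */
>     static void
>     worker_iteration(void *vsocket)
>     {
>     	rte_rwlock_read_lock(&vhost_thread_lock);
>     	/* ... safely use vsocket here ... */
>     	(void)vsocket;
>     	rte_rwlock_read_unlock(&vhost_thread_lock);
>     }
> 
>     /* App configuration thread: writer. Acquiring the write lock
>      * waits for all readers to leave their critical sections, after
>      * which freeing the vsocket cannot race with them. */
>     static void
>     unregister_path(void *vsocket)
>     {
>     	rte_rwlock_write_lock(&vhost_thread_lock);
>     	free(vsocket);
>     	rte_rwlock_write_unlock(&vhost_thread_lock);
>     }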

