From mboxrd@z Thu Jan  1 00:00:00 1970
From: David Marchand
To: dev@dpdk.org
Cc: stable@dpdk.org, Maxime Coquelin, Chenbo Xia, Yuanhan Liu
Subject: [PATCH v2] vhost: avoid sleeping under mutex
Date: Mon, 15 May 2023 13:18:44 +0200
Message-Id: <20230515111844.884784-1-david.marchand@redhat.com>
In-Reply-To: <20230322170524.2314715-1-david.marchand@redhat.com>
References: <20230322170524.2314715-1-david.marchand@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="US-ASCII"
Content-Transfer-Encoding: 8bit
List-Id: DPDK patches and discussions

Covscan reported:

 2. dpdk-21.11/lib/vhost/socket.c:852: lock_acquire:
    Calling function "pthread_mutex_lock" acquires lock "vhost_user.mutex".
23. dpdk-21.11/lib/vhost/socket.c:955: sleep:
    Call to "vhost_user_reconnect_init" might sleep while holding lock
    "vhost_user.mutex".

#  953|		vsocket->reconnect = !(flags & RTE_VHOST_USER_NO_RECONNECT);
#  954|		if (vsocket->reconnect && reconn_tid == 0) {
#  955|->		if (vhost_user_reconnect_init() != 0)
#  956|			goto out_mutex;
#  957|		}

The reason for this warning is that vhost_user_reconnect_init() creates a
ctrl thread and calls nanosleep waiting for this thread to be ready, while
vhost_user.mutex is taken.

Move the call to vhost_user_reconnect_init() out of this mutex.

While at it, a pthread_t value should be considered opaque. Instead of
relying on reconn_tid == 0, use an internal flag in
vhost_user_reconnect_init().
Coverity issue: 373686
Bugzilla ID: 981
Fixes: e623e0c6d8a5 ("vhost: add reconnect ability")
Cc: stable@dpdk.org

Signed-off-by: David Marchand
---
Changes since v1:
- moved reconn_tid in vhost_user_reconnect_init as this variable is not
  used anywhere else,
---
 lib/vhost/socket.c | 19 +++++++++++++------
 1 file changed, 13 insertions(+), 6 deletions(-)

diff --git a/lib/vhost/socket.c b/lib/vhost/socket.c
index 669c322e12..00a912c59e 100644
--- a/lib/vhost/socket.c
+++ b/lib/vhost/socket.c
@@ -427,7 +427,6 @@ struct vhost_user_reconnect_list {
 };
 
 static struct vhost_user_reconnect_list reconn_list;
-static pthread_t reconn_tid;
 
 static int
 vhost_user_connect_nonblock(char *path, int fd, struct sockaddr *un, size_t sz)
@@ -498,8 +497,13 @@ vhost_user_client_reconnect(void *arg __rte_unused)
 static int
 vhost_user_reconnect_init(void)
 {
+	static bool reconn_init_done;
+	static pthread_t reconn_tid;
 	int ret;
 
+	if (reconn_init_done)
+		return 0;
+
 	ret = pthread_mutex_init(&reconn_list.mutex, NULL);
 	if (ret < 0) {
 		VHOST_LOG_CONFIG("thread", ERR, "%s: failed to initialize mutex\n", __func__);
@@ -515,6 +519,8 @@ vhost_user_reconnect_init(void)
 		VHOST_LOG_CONFIG("thread", ERR,
 			"%s: failed to destroy reconnect mutex\n",
 			__func__);
+	} else {
+		reconn_init_done = true;
 	}
 
 	return ret;
@@ -866,6 +872,11 @@ rte_vhost_driver_register(const char *path, uint64_t flags)
 	if (!path)
 		return -1;
 
+	if ((flags & RTE_VHOST_USER_CLIENT) != 0 &&
+			(flags & RTE_VHOST_USER_NO_RECONNECT) == 0 &&
+			vhost_user_reconnect_init() != 0)
+		return -1;
+
 	pthread_mutex_lock(&vhost_user.mutex);
 
 	if (vhost_user.vsocket_cnt == MAX_VHOST_SOCKET) {
@@ -961,11 +972,7 @@ rte_vhost_driver_register(const char *path, uint64_t flags)
 	}
 
 	if ((flags & RTE_VHOST_USER_CLIENT) != 0) {
-		vsocket->reconnect = !(flags & RTE_VHOST_USER_NO_RECONNECT);
-		if (vsocket->reconnect && reconn_tid == 0) {
-			if (vhost_user_reconnect_init() != 0)
-				goto out_mutex;
-		}
+		vsocket->reconnect = (flags & RTE_VHOST_USER_NO_RECONNECT) == 0;
 	} else {
 		vsocket->is_server = true;
 	}
-- 
2.40.0