Subject: Re: [PATCH] vdpa/mlx5: fix unregister kick handler order
From: Maxime Coquelin
To: Yajun Wu, matan@nvidia.com, Viacheslav Ovsiienko, Li Zhang
Cc: dev@dpdk.org, thomas@monjalon.net, rasland@nvidia.com, roniba@nvidia.com, stable@dpdk.org
Date: Thu, 12 Oct 2023 15:50:32 +0200
In-Reply-To: <20230808113221.227319-1-yajunw@nvidia.com>
References: <20230808113221.227319-1-yajunw@nvidia.com>
List-Id: patches for DPDK stable branches

On 8/8/23 13:32, Yajun Wu wrote:
> The mlx5_vdpa_virtq_kick_handler function may still be running and waiting
> on virtq->virtq_lock while the mlx5_vdpa_cqe_event_unset function is trying
> to re-initialize virtq->virtq_lock.
>
> As a result, the mlx5_vdpa_virtq_kick_handler thread cannot be woken up and
> cannot be unregistered. The following message may loop forever when calling
> rte_vhost_driver_unregister(socket_path):
>
> mlx5_vdpa: Try again to unregister fd 154 of virtq 11 interrupt
> mlx5_vdpa: Try again to unregister fd 154 of virtq 11 interrupt
> ...
>
> The fix is to move mlx5_vdpa_virtq_unregister_intr_handle before
> mlx5_vdpa_cqe_event_unset.
>
> Fixes: 057f7d2084 ("vdpa/mlx5: optimize datapath-control synchronization")
> Cc: stable@dpdk.org
>
> Signed-off-by: Yajun Wu
> Acked-by: Matan Azrad
> ---
>  drivers/vdpa/mlx5/mlx5_vdpa.c         | 1 +
>  drivers/vdpa/mlx5/mlx5_vdpa_cthread.c | 1 -
>  2 files changed, 1 insertion(+), 1 deletion(-)

Applied to next-virtio/for-next-net.

Thanks,
Maxime
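
For readers less familiar with the close path, here is a minimal standalone
pthread sketch of the ordering the commit message describes. It is an
illustration under assumptions, not the actual driver code: struct
virtq_model, kick_handler(), unregister_kick_handler(), cqe_event_unset()
and dev_close() are simplified stand-ins for the mlx5_vdpa functions the
patch names.

/*
 * Minimal standalone pthread model of the ordering described above.
 * NOT the mlx5_vdpa driver code: the struct and helpers are simplified
 * stand-ins for the functions the commit message names.
 */
#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>
#include <unistd.h>

struct virtq_model {
	pthread_mutex_t virtq_lock;  /* stands in for virtq->virtq_lock */
	atomic_bool stop;
	pthread_t kick_thread;
};

/* Stand-in for mlx5_vdpa_virtq_kick_handler(): repeatedly takes the
 * per-virtq lock; in the bug it can be left blocked on that lock. */
static void *
kick_handler(void *arg)
{
	struct virtq_model *vq = arg;

	while (!atomic_load(&vq->stop)) {
		pthread_mutex_lock(&vq->virtq_lock);
		/* ... process the kick under the lock ... */
		pthread_mutex_unlock(&vq->virtq_lock);
		usleep(1000);
	}
	return NULL;
}

/* Stand-in for mlx5_vdpa_virtq_unregister_intr_handle(): make sure the
 * handler has fully exited before anything touches virtq_lock again. */
static void
unregister_kick_handler(struct virtq_model *vq)
{
	atomic_store(&vq->stop, true);
	pthread_join(vq->kick_thread, NULL);
}

/* Stand-in for mlx5_vdpa_cqe_event_unset(): re-initializes the lock.
 * Doing this while kick_handler() may still be waiting on the lock is
 * the race the patch fixes. */
static void
cqe_event_unset(struct virtq_model *vq)
{
	pthread_mutex_destroy(&vq->virtq_lock);
	pthread_mutex_init(&vq->virtq_lock, NULL);
}

static void
dev_close(struct virtq_model *vq)
{
	/* Fixed order per the commit message: unregister the kick
	 * handler first, then reset the event state that re-creates
	 * the lock the handler may still be blocked on. */
	unregister_kick_handler(vq);
	cqe_event_unset(vq);
}

int
main(void)
{
	struct virtq_model vq = { .stop = false };

	pthread_mutex_init(&vq.virtq_lock, NULL);
	pthread_create(&vq.kick_thread, NULL, kick_handler, &vq);
	dev_close(&vq);
	pthread_mutex_destroy(&vq.virtq_lock);
	puts("closed cleanly");
	return 0;
}

The only point the sketch makes is the one the fix makes: stop and join the
kick handler before anything re-initializes the lock it may be blocked on.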