From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
Received: from dpdk.org (dpdk.org [92.243.14.124])
	by inbox.dpdk.org (Postfix) with ESMTP id E075FA04BB;
	Tue, 6 Oct 2020 13:58:45 +0200 (CEST)
Received: from [92.243.14.124] (localhost [127.0.0.1])
	by dpdk.org (Postfix) with ESMTP id 04B3E1BC86;
	Tue, 6 Oct 2020 13:50:05 +0200 (CEST)
Received: from mellanox.co.il (mail-il-dmz.mellanox.com [193.47.165.129])
	by dpdk.org (Postfix) with ESMTP id 665E71BC6F
	for ; Tue, 6 Oct 2020 13:50:00 +0200 (CEST)
Received: from Internal Mail-Server by MTLPINE1
	(envelope-from suanmingm@nvidia.com)
	with SMTP; 6 Oct 2020 14:49:55 +0300
Received: from nvidia.com (mtbc-r640-04.mtbc.labs.mlnx [10.75.70.9])
	by labmailer.mlnx (8.13.8/8.13.8) with ESMTP id 096BnC0i028553;
	Tue, 6 Oct 2020 14:49:54 +0300
From: Suanming Mou
To: viacheslavo@nvidia.com, matan@nvidia.com
Cc: rasland@nvidia.com, dev@dpdk.org
Date: Tue, 6 Oct 2020 19:49:07 +0800
Message-Id: <1601984948-313027-25-git-send-email-suanmingm@nvidia.com>
X-Mailer: git-send-email 1.8.3.1
In-Reply-To: <1601984948-313027-1-git-send-email-suanmingm@nvidia.com>
References: <1601984948-313027-1-git-send-email-suanmingm@nvidia.com>
Subject: [dpdk-dev] [PATCH 24/25] net/mlx5: make VLAN network interface
	thread safe
X-BeenThere: dev@dpdk.org
X-Mailman-Version: 2.1.15
Precedence: list
List-Id: DPDK patches and discussions
Errors-To: dev-bounces@dpdk.org
Sender: "dev"

This commit uses atomic operations on the reference count of the VLAN
VM workaround (vmwa) object to make the object thread safe, so that it
cannot be created and destroyed concurrently by different threads.
Signed-off-by: Suanming Mou
---
 drivers/net/mlx5/linux/mlx5_vlan_os.c | 24 +++++++++++++++---------
 1 file changed, 15 insertions(+), 9 deletions(-)

diff --git a/drivers/net/mlx5/linux/mlx5_vlan_os.c b/drivers/net/mlx5/linux/mlx5_vlan_os.c
index 92fc17d..00e160f 100644
--- a/drivers/net/mlx5/linux/mlx5_vlan_os.c
+++ b/drivers/net/mlx5/linux/mlx5_vlan_os.c
@@ -45,10 +45,11 @@
 		return;
 	vlan->created = 0;
 	MLX5_ASSERT(vlan_dev[vlan->tag].refcnt);
-	if (--vlan_dev[vlan->tag].refcnt == 0 &&
-	    vlan_dev[vlan->tag].ifindex) {
+	if (!__atomic_sub_fetch(&vlan_dev[vlan->tag].refcnt,
+				1, __ATOMIC_RELAXED)) {
 		mlx5_nl_vlan_vmwa_delete(vmwa, vlan_dev[vlan->tag].ifindex);
-		vlan_dev[vlan->tag].ifindex = 0;
+		__atomic_store_n(&vlan_dev[vlan->tag].ifindex,
+				 0, __ATOMIC_RELAXED);
 	}
 }
 
@@ -72,16 +73,21 @@
 	MLX5_ASSERT(priv->vmwa_context);
 	if (vlan->created || !vmwa)
 		return;
-	if (vlan_dev[vlan->tag].refcnt == 0) {
-		MLX5_ASSERT(!vlan_dev[vlan->tag].ifindex);
+	if (__atomic_add_fetch
+	    (&vlan_dev[vlan->tag].refcnt, 1, __ATOMIC_RELAXED) == 1) {
+		/* Make sure ifindex is destroyed. */
+		rte_wait_until_equal_32(&vlan_dev[vlan->tag].ifindex,
+					0, __ATOMIC_RELAXED);
 		vlan_dev[vlan->tag].ifindex =
 			mlx5_nl_vlan_vmwa_create(vmwa, vmwa->vf_ifindex,
 						 vlan->tag);
+		if (!vlan_dev[vlan->tag].ifindex) {
+			__atomic_store_n(&vlan_dev[vlan->tag].refcnt,
+					 0, __ATOMIC_RELAXED);
+			return;
+		}
 	}
-	if (vlan_dev[vlan->tag].ifindex) {
-		vlan_dev[vlan->tag].refcnt++;
-		vlan->created = 1;
-	}
+	vlan->created = 1;
 }
 
 /*
-- 
1.8.3.1