From mboxrd@z Thu Jan 1 00:00:00 1970
From: Suanming Mou <suanmingm@nvidia.com>
To: Matan Azrad, Shahaf Shuler, Viacheslav Ovsiienko
Cc: dev@dpdk.org, rasland@nvidia.com, Xueming Li
Date: Wed, 28 Oct 2020 16:59:47 +0800
Message-Id: <1603875616-272798-7-git-send-email-suanmingm@nvidia.com>
X-Mailer: git-send-email 1.8.3.1
In-Reply-To: <1603875616-272798-1-git-send-email-suanmingm@nvidia.com>
References: <1601984948-313027-1-git-send-email-suanmingm@nvidia.com>
 <1603875616-272798-1-git-send-email-suanmingm@nvidia.com>
Subject: [dpdk-dev] [PATCH v5 06/34] net/mlx5: make rte flow list thread safe
List-Id: DPDK patches and discussions

From: Xueming Li

To support multi-thread flow operations, this patch introduces a list
lock for the rte_flow list that manages all the rte_flow handlers.

Signed-off-by: Xueming Li
Acked-by: Matan Azrad
---
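Note (illustration only, not part of the diff): the change brackets every
insertion into and removal from the per-port rte_flow list with a spinlock,
so that flows created or destroyed concurrently from different threads
cannot corrupt the list. The snippet below is a minimal standalone sketch of
that pattern built on DPDK's rte_spinlock API; the flow_list/flow_entry
types and the flow_list_*() helpers are hypothetical and do not exist in the
driver, which uses its ILIST_*() macros over an indexed pool as shown in the
diff that follows.

/*
 * Illustrative sketch only: the "lock around every list update" pattern
 * shown on a plain singly-linked list. Types and helper names here are
 * hypothetical, not the driver's actual code.
 */
#include <stdint.h>
#include <stddef.h>
#include <rte_spinlock.h>

struct flow_entry {
        struct flow_entry *next; /* next created flow */
        uint32_t idx;            /* flow handle index */
};

struct flow_list {
        struct flow_entry *head; /* list of all created flows */
        rte_spinlock_t lock;     /* protects 'head', cf. priv->flow_list_lock */
};

static void
flow_list_init(struct flow_list *fl)
{
        fl->head = NULL;
        rte_spinlock_init(&fl->lock);
}

static void
flow_list_insert(struct flow_list *fl, struct flow_entry *e)
{
        rte_spinlock_lock(&fl->lock); /* serialize concurrent flow creation */
        e->next = fl->head;
        fl->head = e;
        rte_spinlock_unlock(&fl->lock);
}

static void
flow_list_remove(struct flow_list *fl, struct flow_entry *e)
{
        struct flow_entry **p;

        rte_spinlock_lock(&fl->lock); /* serialize concurrent flow destruction */
        for (p = &fl->head; *p != NULL; p = &(*p)->next) {
                if (*p == e) {
                        *p = e->next;
                        break;
                }
        }
        rte_spinlock_unlock(&fl->lock);
}

A spinlock rather than a mutex suits this case: the critical section is only
a few pointer updates and the lock is never held across blocking calls, so
briefly busy-waiting is cheaper than putting a contending thread to sleep.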
 drivers/net/mlx5/linux/mlx5_os.c |  1 +
 drivers/net/mlx5/mlx5.h          |  1 +
 drivers/net/mlx5/mlx5_flow.c     | 10 ++++++++--
 3 files changed, 10 insertions(+), 2 deletions(-)

diff --git a/drivers/net/mlx5/linux/mlx5_os.c b/drivers/net/mlx5/linux/mlx5_os.c
index 0b59e74..a579dde 100644
--- a/drivers/net/mlx5/linux/mlx5_os.c
+++ b/drivers/net/mlx5/linux/mlx5_os.c
@@ -1358,6 +1358,7 @@
                                     MLX5_MAX_MAC_ADDRESSES);
        priv->flows = 0;
        priv->ctrl_flows = 0;
+       rte_spinlock_init(&priv->flow_list_lock);
        TAILQ_INIT(&priv->flow_meters);
        TAILQ_INIT(&priv->flow_meter_profiles);
        /* Hint libmlx5 to use PMD allocator for data plane resources */
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 5bda233..9ab2976 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -856,6 +856,7 @@ struct mlx5_priv {
        struct mlx5_drop drop_queue; /* Flow drop queues. */
        uint32_t flows; /* RTE Flow rules. */
        uint32_t ctrl_flows; /* Control flow rules. */
+       rte_spinlock_t flow_list_lock;
        struct mlx5_obj_ops obj_ops; /* HW objects operations. */
        LIST_HEAD(rxq, mlx5_rxq_ctrl) rxqsctrl; /* DPDK Rx queues. */
        LIST_HEAD(rxqobj, mlx5_rxq_obj) rxqsobj; /* Verbs/DevX Rx queues. */
diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index ed2acd1..cc31801 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -5774,9 +5774,12 @@ struct tunnel_default_miss_ctx {
                if (ret < 0)
                        goto error;
        }
-       if (list)
+       if (list) {
+               rte_spinlock_lock(&priv->flow_list_lock);
                ILIST_INSERT(priv->sh->ipool[MLX5_IPOOL_RTE_FLOW], list, idx,
                             flow, next);
+               rte_spinlock_unlock(&priv->flow_list_lock);
+       }
        flow_rxq_flags_set(dev, flow);
        rte_free(translated_actions);
        /* Nested flow creation index recovery. */
@@ -5957,9 +5960,12 @@ struct rte_flow *
        if (dev->data->dev_started)
                flow_rxq_flags_trim(dev, flow);
        flow_drv_destroy(dev, flow);
-       if (list)
+       if (list) {
+               rte_spinlock_lock(&priv->flow_list_lock);
                ILIST_REMOVE(priv->sh->ipool[MLX5_IPOOL_RTE_FLOW], list,
                             flow_idx, flow, next);
+               rte_spinlock_unlock(&priv->flow_list_lock);
+       }
        flow_mreg_del_copy_action(dev, flow);
        if (flow->fdir) {
                LIST_FOREACH(priv_fdir_flow, &priv->fdir_flows, next) {
-- 
1.8.3.1