From mboxrd@z Thu Jan 1 00:00:00 1970
From: Suanming Mou
Date: Tue, 13 Jul 2021 11:44:48 +0300
Message-ID: <20210713084500.19964-15-suanmingm@nvidia.com>
X-Mailer: git-send-email 2.18.1
In-Reply-To: <20210713084500.19964-1-suanmingm@nvidia.com>
References: <20210527093403.1153127-1-suanmingm@nvidia.com> <20210713084500.19964-1-suanmingm@nvidia.com>
MIME-Version: 1.0
Content-Type: text/plain
Subject: [dpdk-dev] [PATCH v6 14/26] common/mlx5: add list lcore share
List-Id: DPDK patches and discussions
Sender: "dev"

Some SW-steering actions exist only in memory, so duplicate objects are
acceptable for them. For lists holding such actions there is no need to
check whether the same object already exists in another lcore's local
sub-list; searching only the local list is more efficient. This commit
adds an lcore-share mode to the list to optimize list registration.
Signed-off-by: Suanming Mou
Acked-by: Matan Azrad
---
 drivers/common/mlx5/mlx5_common_utils.c | 48 ++++++++++++++++++-------
 drivers/common/mlx5/mlx5_common_utils.h | 16 ++++++---
 drivers/net/mlx5/linux/mlx5_os.c        | 11 +++---
 drivers/net/mlx5/mlx5_flow_dv.c         |  2 +-
 drivers/net/mlx5/windows/mlx5_os.c      |  2 +-
 5 files changed, 56 insertions(+), 23 deletions(-)

diff --git a/drivers/common/mlx5/mlx5_common_utils.c b/drivers/common/mlx5/mlx5_common_utils.c
index 8bb8a6016d..bc08f8ba25 100644
--- a/drivers/common/mlx5/mlx5_common_utils.c
+++ b/drivers/common/mlx5/mlx5_common_utils.c
@@ -14,7 +14,7 @@
 /********************* mlx5 list ************************/

 struct mlx5_list *
-mlx5_list_create(const char *name, void *ctx,
+mlx5_list_create(const char *name, void *ctx, bool lcores_share,
		 mlx5_list_create_cb cb_create,
		 mlx5_list_match_cb cb_match,
		 mlx5_list_remove_cb cb_remove,
@@ -35,6 +35,7 @@ mlx5_list_create(const char *name, void *ctx,
	if (name)
		snprintf(list->name, sizeof(list->name), "%s", name);
	list->ctx = ctx;
+	list->lcores_share = lcores_share;
	list->cb_create = cb_create;
	list->cb_match = cb_match;
	list->cb_remove = cb_remove;
@@ -119,7 +120,10 @@ __list_cache_clean(struct mlx5_list *list, int lcore_index)
		if (__atomic_load_n(&entry->ref_cnt, __ATOMIC_RELAXED) == 0) {
			LIST_REMOVE(entry, next);
-			list->cb_clone_free(list, entry);
+			if (list->lcores_share)
+				list->cb_clone_free(list, entry);
+			else
+				list->cb_remove(list, entry);
			inv_cnt--;
		}
		entry = nentry;
@@ -129,7 +133,7 @@ __list_cache_clean(struct mlx5_list *list, int lcore_index)
 struct mlx5_list_entry *
 mlx5_list_register(struct mlx5_list *list, void *ctx)
 {
-	struct mlx5_list_entry *entry, *local_entry;
+	struct mlx5_list_entry *entry = NULL, *local_entry;
	volatile uint32_t prev_gen_cnt = 0;
	int lcore_index = rte_lcore_index(rte_lcore_id());
@@ -145,25 +149,36 @@ mlx5_list_register(struct mlx5_list *list, void *ctx)
	local_entry = __list_lookup(list, lcore_index, ctx, true);
	if (local_entry)
		return local_entry;
-	/* 2. Lookup with read lock on global list, reuse if found. */
-	rte_rwlock_read_lock(&list->lock);
-	entry = __list_lookup(list, RTE_MAX_LCORE, ctx, true);
-	if (likely(entry)) {
+	if (list->lcores_share) {
+		/* 2. Lookup with read lock on global list, reuse if found. */
+		rte_rwlock_read_lock(&list->lock);
+		entry = __list_lookup(list, RTE_MAX_LCORE, ctx, true);
+		if (likely(entry)) {
+			rte_rwlock_read_unlock(&list->lock);
+			return mlx5_list_cache_insert(list, lcore_index, entry,
+						      ctx);
+		}
+		prev_gen_cnt = list->gen_cnt;
		rte_rwlock_read_unlock(&list->lock);
-		return mlx5_list_cache_insert(list, lcore_index, entry, ctx);
	}
-	prev_gen_cnt = list->gen_cnt;
-	rte_rwlock_read_unlock(&list->lock);
	/* 3. Prepare new entry for global list and for cache. */
	entry = list->cb_create(list, entry, ctx);
	if (unlikely(!entry))
		return NULL;
+	entry->ref_cnt = 1u;
+	if (!list->lcores_share) {
+		entry->lcore_idx = (uint32_t)lcore_index;
+		LIST_INSERT_HEAD(&list->cache[lcore_index].h, entry, next);
+		__atomic_add_fetch(&list->count, 1, __ATOMIC_RELAXED);
+		DRV_LOG(DEBUG, "MLX5 list %s c%d entry %p new: %u.",
+			list->name, lcore_index, (void *)entry, entry->ref_cnt);
+		return entry;
+	}
	local_entry = list->cb_clone(list, entry, ctx);
	if (unlikely(!local_entry)) {
		list->cb_remove(list, entry);
		return NULL;
	}
-	entry->ref_cnt = 1u;
	local_entry->ref_cnt = 1u;
	local_entry->gentry = entry;
	local_entry->lcore_idx = (uint32_t)lcore_index;
@@ -207,13 +222,22 @@ mlx5_list_unregister(struct mlx5_list *list,
	MLX5_ASSERT(lcore_idx < RTE_MAX_LCORE);
	if (entry->lcore_idx == (uint32_t)lcore_idx) {
		LIST_REMOVE(entry, next);
-		list->cb_clone_free(list, entry);
+		if (list->lcores_share)
+			list->cb_clone_free(list, entry);
+		else
+			list->cb_remove(list, entry);
	} else if (likely(lcore_idx != -1)) {
		__atomic_add_fetch(&list->cache[entry->lcore_idx].inv_cnt, 1,
				   __ATOMIC_RELAXED);
	} else {
		return 0;
	}
+	if (!list->lcores_share) {
+		__atomic_sub_fetch(&list->count, 1, __ATOMIC_RELAXED);
+		DRV_LOG(DEBUG, "mlx5 list %s entry %p removed.",
+			list->name, (void *)entry);
+		return 0;
+	}
	if (__atomic_sub_fetch(&gentry->ref_cnt, 1, __ATOMIC_RELAXED) != 0)
		return 1;
	rte_rwlock_write_lock(&list->lock);
diff --git a/drivers/common/mlx5/mlx5_common_utils.h b/drivers/common/mlx5/mlx5_common_utils.h
index 96add6d003..000279d236 100644
--- a/drivers/common/mlx5/mlx5_common_utils.h
+++ b/drivers/common/mlx5/mlx5_common_utils.h
@@ -100,11 +100,8 @@ typedef struct mlx5_list_entry *(*mlx5_list_create_cb)
  */
 struct mlx5_list {
	char name[MLX5_NAME_SIZE]; /**< Name of the mlx5 list. */
-	volatile uint32_t gen_cnt;
-	/* List modification will update generation count. */
-	volatile uint32_t count; /* number of entries in list. */
	void *ctx; /* user objects target to callback. */
-	rte_rwlock_t lock; /* read/write lock. */
+	bool lcores_share; /* Whether to share objects between the lcores. */
	mlx5_list_create_cb cb_create; /**< entry create callback. */
	mlx5_list_match_cb cb_match; /**< entry match callback. */
	mlx5_list_remove_cb cb_remove; /**< entry remove callback. */
@@ -112,17 +109,27 @@ struct mlx5_list {
	mlx5_list_clone_free_cb cb_clone_free;
	struct mlx5_list_cache cache[RTE_MAX_LCORE + 1];
	/* Lcore cache, last index is the global cache. */
+	volatile uint32_t gen_cnt; /* List modification may update it. */
+	volatile uint32_t count; /* number of entries in list. */
+	rte_rwlock_t lock; /* read/write lock. */
 };

 /**
  * Create a mlx5 list.
  *
+ * For actions in SW-steering is only memory and can be allowed
+ * to create duplicate objects, the lists don't need to check if
+ * there are existing same objects in other sub local lists,
+ * search the object only in local list will be more efficient.
+ *
  * @param list
  *   Pointer to the hast list table.
  * @param name
  *   Name of the mlx5 list.
  * @param ctx
  *   Pointer to the list context data.
+ * @param lcores_share
+ *   Whether to share objects between the lcores.
  * @param cb_create
  *   Callback function for entry create.
  * @param cb_match
@@ -134,6 +141,7 @@ struct mlx5_list {
  */
 __rte_internal
 struct mlx5_list *mlx5_list_create(const char *name, void *ctx,
+				   bool lcores_share,
				   mlx5_list_create_cb cb_create,
				   mlx5_list_match_cb cb_match,
				   mlx5_list_remove_cb cb_remove,
diff --git a/drivers/net/mlx5/linux/mlx5_os.c b/drivers/net/mlx5/linux/mlx5_os.c
index 2a9a6c3bf8..ce41fb34a0 100644
--- a/drivers/net/mlx5/linux/mlx5_os.c
+++ b/drivers/net/mlx5/linux/mlx5_os.c
@@ -274,7 +274,7 @@ mlx5_alloc_shared_dr(struct mlx5_priv *priv)
 #ifdef HAVE_IBV_FLOW_DV_SUPPORT
	/* Init port id action list. */
	snprintf(s, sizeof(s), "%s_port_id_action_list", sh->ibdev_name);
-	sh->port_id_action_list = mlx5_list_create(s, sh,
+	sh->port_id_action_list = mlx5_list_create(s, sh, true,
						   flow_dv_port_id_create_cb,
						   flow_dv_port_id_match_cb,
						   flow_dv_port_id_remove_cb,
@@ -284,7 +284,7 @@ mlx5_alloc_shared_dr(struct mlx5_priv *priv)
		goto error;
	/* Init push vlan action list. */
	snprintf(s, sizeof(s), "%s_push_vlan_action_list", sh->ibdev_name);
-	sh->push_vlan_action_list = mlx5_list_create(s, sh,
+	sh->push_vlan_action_list = mlx5_list_create(s, sh, true,
						     flow_dv_push_vlan_create_cb,
						     flow_dv_push_vlan_match_cb,
						     flow_dv_push_vlan_remove_cb,
@@ -294,7 +294,7 @@ mlx5_alloc_shared_dr(struct mlx5_priv *priv)
		goto error;
	/* Init sample action list. */
	snprintf(s, sizeof(s), "%s_sample_action_list", sh->ibdev_name);
-	sh->sample_action_list = mlx5_list_create(s, sh,
+	sh->sample_action_list = mlx5_list_create(s, sh, true,
						  flow_dv_sample_create_cb,
						  flow_dv_sample_match_cb,
						  flow_dv_sample_remove_cb,
@@ -304,7 +304,7 @@ mlx5_alloc_shared_dr(struct mlx5_priv *priv)
		goto error;
	/* Init dest array action list. */
	snprintf(s, sizeof(s), "%s_dest_array_list", sh->ibdev_name);
-	sh->dest_array_list = mlx5_list_create(s, sh,
+	sh->dest_array_list = mlx5_list_create(s, sh, true,
					       flow_dv_dest_array_create_cb,
					       flow_dv_dest_array_match_cb,
					       flow_dv_dest_array_remove_cb,
@@ -1759,7 +1759,8 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev,
		err = ENOTSUP;
		goto error;
	}
-	priv->hrxqs = mlx5_list_create("hrxq", eth_dev, mlx5_hrxq_create_cb,
+	priv->hrxqs = mlx5_list_create("hrxq", eth_dev, true,
+				       mlx5_hrxq_create_cb,
				       mlx5_hrxq_match_cb,
				       mlx5_hrxq_remove_cb,
				       mlx5_hrxq_clone_cb,
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index 5a536e3dff..4a45172a12 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -10054,7 +10054,7 @@ flow_dv_tbl_create_cb(struct mlx5_hlist *list, uint64_t key64, void *cb_ctx)
	MKSTR(matcher_name, "%s_%s_%u_%u_matcher_list",
	      key.is_fdb ? "FDB" : "NIC", key.is_egress ? "egress" : "ingress",
	      key.level, key.id);
-	tbl_data->matchers = mlx5_list_create(matcher_name, sh,
+	tbl_data->matchers = mlx5_list_create(matcher_name, sh, true,
					      flow_dv_matcher_create_cb,
					      flow_dv_matcher_match_cb,
					      flow_dv_matcher_remove_cb,
diff --git a/drivers/net/mlx5/windows/mlx5_os.c b/drivers/net/mlx5/windows/mlx5_os.c
index e6176e70d2..a04f93e1d4 100644
--- a/drivers/net/mlx5/windows/mlx5_os.c
+++ b/drivers/net/mlx5/windows/mlx5_os.c
@@ -610,7 +610,7 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev,
		err = ENOTSUP;
		goto error;
	}
-	priv->hrxqs = mlx5_list_create("hrxq", eth_dev,
+	priv->hrxqs = mlx5_list_create("hrxq", eth_dev, true,
				       mlx5_hrxq_create_cb, mlx5_hrxq_match_cb,
				       mlx5_hrxq_remove_cb, mlx5_hrxq_clone_cb,
				       mlx5_hrxq_clone_free_cb);
-- 
2.25.1