From: Suanming Mou
To: ,
CC: , ,
Date: Mon, 12 Jul 2021 04:46:47 +0300
Message-ID: <20210712014654.32428-20-suanmingm@nvidia.com>
X-Mailer: git-send-email 2.18.1
In-Reply-To: <20210712014654.32428-1-suanmingm@nvidia.com>
References: <20210527093403.1153127-1-suanmingm@nvidia.com> <20210712014654.32428-1-suanmingm@nvidia.com>
MIME-Version: 1.0
Content-Type: text/plain
Subject: [dpdk-dev] [PATCH v5 19/26] common/mlx5: support list non-lcore operations
List-Id: DPDK patches and discussions
Errors-To: dev-bounces@dpdk.org
Sender: "dev"

This commit supports list operations from non-lcore threads. Such threads
have no per-lcore cache of their own, so they share one extra sub-list,
protected by a dedicated spinlock.
Signed-off-by: Suanming Mou
Acked-by: Matan Azrad
---
 drivers/common/mlx5/mlx5_common_utils.c | 92 +++++++++++++++++--------
 drivers/common/mlx5/mlx5_common_utils.h |  9 ++-
 2 files changed, 71 insertions(+), 30 deletions(-)

diff --git a/drivers/common/mlx5/mlx5_common_utils.c b/drivers/common/mlx5/mlx5_common_utils.c
index 858c8d8164..c8b48f3afc 100644
--- a/drivers/common/mlx5/mlx5_common_utils.c
+++ b/drivers/common/mlx5/mlx5_common_utils.c
@@ -20,8 +20,8 @@ mlx5_list_init(struct mlx5_list_inconst *l_inconst,
 {
 	rte_rwlock_init(&l_inconst->lock);
 	if (l_const->lcores_share) {
-		l_inconst->cache[RTE_MAX_LCORE] = gc;
-		LIST_INIT(&l_inconst->cache[RTE_MAX_LCORE]->h);
+		l_inconst->cache[MLX5_LIST_GLOBAL] = gc;
+		LIST_INIT(&l_inconst->cache[MLX5_LIST_GLOBAL]->h);
 	}
 	DRV_LOG(DEBUG, "mlx5 list %s initialized.", l_const->name);
 	return 0;
@@ -59,6 +59,7 @@ mlx5_list_create(const char *name, void *ctx, bool lcores_share,
 	list->l_const.cb_remove = cb_remove;
 	list->l_const.cb_clone = cb_clone;
 	list->l_const.cb_clone_free = cb_clone_free;
+	rte_spinlock_init(&list->l_const.lcore_lock);
 	if (lcores_share)
 		gc = (struct mlx5_list_cache *)(list + 1);
 	if (mlx5_list_init(&list->l_inconst, &list->l_const, gc) != 0) {
@@ -85,11 +86,11 @@ __list_lookup(struct mlx5_list_inconst *l_inconst,
 			DRV_LOG(DEBUG, "mlx5 list %s entry %p ref: %u.",
 				l_const->name, (void *)entry,
 				entry->ref_cnt);
-		} else if (lcore_index < RTE_MAX_LCORE) {
+		} else if (lcore_index < MLX5_LIST_GLOBAL) {
 			ret = __atomic_load_n(&entry->ref_cnt,
 					      __ATOMIC_RELAXED);
 		}
-		if (likely(ret != 0 || lcore_index == RTE_MAX_LCORE))
+		if (likely(ret != 0 || lcore_index == MLX5_LIST_GLOBAL))
 			return entry;
 		if (reuse && ret == 0)
 			entry->ref_cnt--; /* Invalid entry. */
@@ -107,10 +108,11 @@ _mlx5_list_lookup(struct mlx5_list_inconst *l_inconst,
 	int i;
 
 	rte_rwlock_read_lock(&l_inconst->lock);
-	for (i = 0; i < RTE_MAX_LCORE; i++) {
+	for (i = 0; i < MLX5_LIST_GLOBAL; i++) {
 		if (!l_inconst->cache[i])
 			continue;
-		entry = __list_lookup(l_inconst, l_const, i, ctx, false);
+		entry = __list_lookup(l_inconst, l_const, i,
+				      ctx, false);
 		if (entry)
 			break;
 	}
@@ -170,18 +172,11 @@ __list_cache_clean(struct mlx5_list_inconst *l_inconst,
 static inline struct mlx5_list_entry *
 _mlx5_list_register(struct mlx5_list_inconst *l_inconst,
 		    struct mlx5_list_const *l_const,
-		    void *ctx)
+		    void *ctx, int lcore_index)
 {
 	struct mlx5_list_entry *entry, *local_entry;
 	volatile uint32_t prev_gen_cnt = 0;
-	int lcore_index = rte_lcore_index(rte_lcore_id());
-
 	MLX5_ASSERT(l_inconst);
-	MLX5_ASSERT(lcore_index < RTE_MAX_LCORE);
-	if (unlikely(lcore_index == -1)) {
-		rte_errno = ENOTSUP;
-		return NULL;
-	}
 	if (unlikely(!l_inconst->cache[lcore_index])) {
 		l_inconst->cache[lcore_index] = mlx5_malloc(0,
 					sizeof(struct mlx5_list_cache),
@@ -202,7 +197,7 @@ _mlx5_list_register(struct mlx5_list_inconst *l_inconst,
 	if (l_const->lcores_share) {
 		/* 2. Lookup with read lock on global list, reuse if found. */
 		rte_rwlock_read_lock(&l_inconst->lock);
-		entry = __list_lookup(l_inconst, l_const, RTE_MAX_LCORE,
+		entry = __list_lookup(l_inconst, l_const, MLX5_LIST_GLOBAL,
 				      ctx, true);
 		if (likely(entry)) {
 			rte_rwlock_read_unlock(&l_inconst->lock);
@@ -241,7 +236,7 @@ _mlx5_list_register(struct mlx5_list_inconst *l_inconst,
 	if (unlikely(prev_gen_cnt != l_inconst->gen_cnt)) {
 		struct mlx5_list_entry *oentry =
 					__list_lookup(l_inconst, l_const,
-						      RTE_MAX_LCORE,
+						      MLX5_LIST_GLOBAL,
 						      ctx, true);
 
 		if (unlikely(oentry)) {
@@ -255,7 +250,7 @@ _mlx5_list_register(struct mlx5_list_inconst *l_inconst,
 		}
 	}
 	/* 5. Update lists. */
-	LIST_INSERT_HEAD(&l_inconst->cache[RTE_MAX_LCORE]->h, entry, next);
+	LIST_INSERT_HEAD(&l_inconst->cache[MLX5_LIST_GLOBAL]->h, entry, next);
 	l_inconst->gen_cnt++;
 	rte_rwlock_write_unlock(&l_inconst->lock);
 	LIST_INSERT_HEAD(&l_inconst->cache[lcore_index]->h, local_entry, next);
@@ -268,21 +263,30 @@ _mlx5_list_register(struct mlx5_list_inconst *l_inconst,
 struct mlx5_list_entry *
 mlx5_list_register(struct mlx5_list *list, void *ctx)
 {
-	return _mlx5_list_register(&list->l_inconst, &list->l_const, ctx);
+	struct mlx5_list_entry *entry;
+	int lcore_index = rte_lcore_index(rte_lcore_id());
+
+	if (unlikely(lcore_index == -1)) {
+		lcore_index = MLX5_LIST_NLCORE;
+		rte_spinlock_lock(&list->l_const.lcore_lock);
+	}
+	entry = _mlx5_list_register(&list->l_inconst, &list->l_const, ctx,
+				    lcore_index);
+	if (unlikely(lcore_index == MLX5_LIST_NLCORE))
+		rte_spinlock_unlock(&list->l_const.lcore_lock);
+	return entry;
 }
 
 static inline int
 _mlx5_list_unregister(struct mlx5_list_inconst *l_inconst,
 		      struct mlx5_list_const *l_const,
-		      struct mlx5_list_entry *entry)
+		      struct mlx5_list_entry *entry,
+		      int lcore_idx)
 {
 	struct mlx5_list_entry *gentry = entry->gentry;
-	int lcore_idx;
 
 	if (__atomic_sub_fetch(&entry->ref_cnt, 1, __ATOMIC_RELAXED) != 0)
 		return 1;
-	lcore_idx = rte_lcore_index(rte_lcore_id());
-	MLX5_ASSERT(lcore_idx < RTE_MAX_LCORE);
 	if (entry->lcore_idx == (uint32_t)lcore_idx) {
 		LIST_REMOVE(entry, next);
 		if (l_const->lcores_share)
@@ -321,7 +325,19 @@ int
 mlx5_list_unregister(struct mlx5_list *list,
 		     struct mlx5_list_entry *entry)
 {
-	return _mlx5_list_unregister(&list->l_inconst, &list->l_const, entry);
+	int ret;
+	int lcore_index = rte_lcore_index(rte_lcore_id());
+
+	if (unlikely(lcore_index == -1)) {
+		lcore_index = MLX5_LIST_NLCORE;
+		rte_spinlock_lock(&list->l_const.lcore_lock);
+	}
+	ret = _mlx5_list_unregister(&list->l_inconst, &list->l_const, entry,
+				    lcore_index);
+	if (unlikely(lcore_index == MLX5_LIST_NLCORE))
+		rte_spinlock_unlock(&list->l_const.lcore_lock);
+	return ret;
+
 }
 
 static void
@@ -332,13 +348,13 @@ mlx5_list_uninit(struct mlx5_list_inconst *l_inconst,
 	int i;
 
 	MLX5_ASSERT(l_inconst);
-	for (i = 0; i <= RTE_MAX_LCORE; i++) {
+	for (i = 0; i < MLX5_LIST_MAX; i++) {
 		if (!l_inconst->cache[i])
 			continue;
 		while (!LIST_EMPTY(&l_inconst->cache[i]->h)) {
 			entry = LIST_FIRST(&l_inconst->cache[i]->h);
 			LIST_REMOVE(entry, next);
-			if (i == RTE_MAX_LCORE) {
+			if (i == MLX5_LIST_GLOBAL) {
 				l_const->cb_remove(l_const->ctx, entry);
 				DRV_LOG(DEBUG, "mlx5 list %s entry %p "
 					"destroyed.", l_const->name,
@@ -347,7 +363,7 @@ mlx5_list_uninit(struct mlx5_list_inconst *l_inconst,
 				l_const->cb_clone_free(l_const->ctx, entry);
 			}
 		}
-		if (i != RTE_MAX_LCORE)
+		if (i != MLX5_LIST_GLOBAL)
 			mlx5_free(l_inconst->cache[i]);
 	}
 }
@@ -416,6 +432,7 @@ mlx5_hlist_create(const char *name, uint32_t size, bool direct_key,
 	h->l_const.cb_remove = cb_remove;
 	h->l_const.cb_clone = cb_clone;
 	h->l_const.cb_clone_free = cb_clone_free;
+	rte_spinlock_init(&h->l_const.lcore_lock);
 	h->mask = act_size - 1;
 	h->direct_key = direct_key;
 	gc = (struct mlx5_list_cache *)&h->buckets[act_size];
@@ -449,28 +466,45 @@ mlx5_hlist_register(struct mlx5_hlist *h, uint64_t key, void *ctx)
 {
 	uint32_t idx;
 	struct mlx5_list_entry *entry;
+	int lcore_index = rte_lcore_index(rte_lcore_id());
 
 	if (h->direct_key)
 		idx = (uint32_t)(key & h->mask);
 	else
 		idx = rte_hash_crc_8byte(key, 0) & h->mask;
-	entry = _mlx5_list_register(&h->buckets[idx].l, &h->l_const, ctx);
+	if (unlikely(lcore_index == -1)) {
+		lcore_index = MLX5_LIST_NLCORE;
+		rte_spinlock_lock(&h->l_const.lcore_lock);
+	}
+	entry = _mlx5_list_register(&h->buckets[idx].l, &h->l_const, ctx,
+				    lcore_index);
 	if (likely(entry)) {
 		if (h->l_const.lcores_share)
 			entry->gentry->bucket_idx = idx;
 		else
 			entry->bucket_idx = idx;
 	}
+	if (unlikely(lcore_index == MLX5_LIST_NLCORE))
+		rte_spinlock_unlock(&h->l_const.lcore_lock);
 	return entry;
 }
 
 int
 mlx5_hlist_unregister(struct mlx5_hlist *h, struct mlx5_list_entry *entry)
 {
+	int lcore_index = rte_lcore_index(rte_lcore_id());
+	int ret;
 	uint32_t idx = h->l_const.lcores_share ? entry->gentry->bucket_idx :
 							entry->bucket_idx;
-
-	return _mlx5_list_unregister(&h->buckets[idx].l, &h->l_const, entry);
+	if (unlikely(lcore_index == -1)) {
+		lcore_index = MLX5_LIST_NLCORE;
+		rte_spinlock_lock(&h->l_const.lcore_lock);
+	}
+	ret = _mlx5_list_unregister(&h->buckets[idx].l, &h->l_const, entry,
+				    lcore_index);
+	if (unlikely(lcore_index == MLX5_LIST_NLCORE))
+		rte_spinlock_unlock(&h->l_const.lcore_lock);
+	return ret;
 }
 
 void
diff --git a/drivers/common/mlx5/mlx5_common_utils.h b/drivers/common/mlx5/mlx5_common_utils.h
index 5718a21be0..613d29de0c 100644
--- a/drivers/common/mlx5/mlx5_common_utils.h
+++ b/drivers/common/mlx5/mlx5_common_utils.h
@@ -11,6 +11,12 @@
 
 /** Maximum size of string for naming. */
 #define MLX5_NAME_SIZE			32
+/** Maximum size of list. */
+#define MLX5_LIST_MAX			(RTE_MAX_LCORE + 2)
+/** Global list index. */
+#define MLX5_LIST_GLOBAL		((MLX5_LIST_MAX) - 1)
+/** None rte core list index. */
+#define MLX5_LIST_NLCORE		((MLX5_LIST_MAX) - 2)
 
 struct mlx5_list;
 
@@ -87,6 +93,7 @@ struct mlx5_list_const {
 	char name[MLX5_NAME_SIZE]; /**< Name of the mlx5 list. */
 	void *ctx; /* user objects target to callback. */
 	bool lcores_share; /* Whether to share objects between the lcores. */
+	rte_spinlock_t lcore_lock; /* Lock for non-lcore list. */
 	mlx5_list_create_cb cb_create; /**< entry create callback. */
 	mlx5_list_match_cb cb_match; /**< entry match callback. */
 	mlx5_list_remove_cb cb_remove; /**< entry remove callback. */
@@ -102,7 +109,7 @@ struct mlx5_list_inconst {
 	rte_rwlock_t lock; /* read/write lock. */
 	volatile uint32_t gen_cnt; /* List modification may update it. */
 	volatile uint32_t count; /* number of entries in list. */
-	struct mlx5_list_cache *cache[RTE_MAX_LCORE + 1];
+	struct mlx5_list_cache *cache[MLX5_LIST_MAX];
 	/* Lcore cache, last index is the global cache. */
 };
 
-- 
2.25.1