From: Suanming Mou
Date: Fri, 2 Jul 2021 09:18:15 +0300
Message-ID: <20210702061816.10454-22-suanmingm@nvidia.com>
X-Mailer: git-send-email 2.18.1
In-Reply-To: <20210702061816.10454-1-suanmingm@nvidia.com>
References: <20210527093403.1153127-1-suanmingm@nvidia.com>
 <20210702061816.10454-1-suanmingm@nvidia.com>
Subject: [dpdk-dev] [PATCH v3 21/22] net/mlx5: support list none local core operations
List-Id: DPDK patches and discussions

This commit adds support for list operations from non-lcore threads,
i.e. threads for which rte_lcore_index() returns -1 and which therefore
own no per-lcore cache. Registration from such a thread previously
failed with rte_errno set to ENOTSUP; now all non-lcore threads share
one extra sub-list (cache slot MLX5_LIST_NLCORE), serialized by a
dedicated spinlock (nlcore_lock).
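The pattern, condensed (an illustrative sketch only; the diff below
applies it in mlx5_list_register(), mlx5_list_unregister() and the
mlx5_hlist_*() wrappers):

	/* MLX5_LIST_MAX = RTE_MAX_LCORE + 2 cache slots: one per lcore,
	 * plus one shared slot for non-lcore threads (MLX5_LIST_NLCORE)
	 * and one global slot (MLX5_LIST_GLOBAL).
	 */
	int lcore_index = rte_lcore_index(rte_lcore_id());

	if (unlikely(lcore_index == -1)) {
		/* Non-lcore thread: fall back to the shared slot. */
		lcore_index = MLX5_LIST_NLCORE;
		rte_spinlock_lock(&list->l_const.nlcore_lock);
	}
	/* ... per-slot list operation, indexed by lcore_index ... */
	if (unlikely(lcore_index == MLX5_LIST_NLCORE))
		rte_spinlock_unlock(&list->l_const.nlcore_lock);

Threads that do run on an lcore keep using their private cache slot
without taking any lock, so the new spinlock is paid only on the
non-lcore path.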
Signed-off-by: Suanming Mou
Acked-by: Matan Azrad
---
 drivers/common/mlx5/mlx5_common_utils.c | 92 +++++++++++++++++--------
 drivers/common/mlx5/mlx5_common_utils.h |  9 ++-
 2 files changed, 71 insertions(+), 30 deletions(-)

diff --git a/drivers/common/mlx5/mlx5_common_utils.c b/drivers/common/mlx5/mlx5_common_utils.c
index 858c8d8164..d58d0d08ab 100644
--- a/drivers/common/mlx5/mlx5_common_utils.c
+++ b/drivers/common/mlx5/mlx5_common_utils.c
@@ -20,8 +20,8 @@ mlx5_list_init(struct mlx5_list_inconst *l_inconst,
 {
 	rte_rwlock_init(&l_inconst->lock);
 	if (l_const->lcores_share) {
-		l_inconst->cache[RTE_MAX_LCORE] = gc;
-		LIST_INIT(&l_inconst->cache[RTE_MAX_LCORE]->h);
+		l_inconst->cache[MLX5_LIST_GLOBAL] = gc;
+		LIST_INIT(&l_inconst->cache[MLX5_LIST_GLOBAL]->h);
 	}
 	DRV_LOG(DEBUG, "mlx5 list %s initialized.", l_const->name);
 	return 0;
@@ -59,6 +59,7 @@ mlx5_list_create(const char *name, void *ctx, bool lcores_share,
 	list->l_const.cb_remove = cb_remove;
 	list->l_const.cb_clone = cb_clone;
 	list->l_const.cb_clone_free = cb_clone_free;
+	rte_spinlock_init(&list->l_const.nlcore_lock);
 	if (lcores_share)
 		gc = (struct mlx5_list_cache *)(list + 1);
 	if (mlx5_list_init(&list->l_inconst, &list->l_const, gc) != 0) {
@@ -85,11 +86,11 @@ __list_lookup(struct mlx5_list_inconst *l_inconst,
 			DRV_LOG(DEBUG, "mlx5 list %s entry %p ref: %u.",
 				l_const->name, (void *)entry,
 				entry->ref_cnt);
-		} else if (lcore_index < RTE_MAX_LCORE) {
+		} else if (lcore_index < MLX5_LIST_GLOBAL) {
 			ret = __atomic_load_n(&entry->ref_cnt,
 					      __ATOMIC_RELAXED);
 		}
-		if (likely(ret != 0 || lcore_index == RTE_MAX_LCORE))
+		if (likely(ret != 0 || lcore_index == MLX5_LIST_GLOBAL))
 			return entry;
 		if (reuse && ret == 0)
 			entry->ref_cnt--; /* Invalid entry. */
@@ -107,10 +108,11 @@ _mlx5_list_lookup(struct mlx5_list_inconst *l_inconst,
 	int i;
 
 	rte_rwlock_read_lock(&l_inconst->lock);
-	for (i = 0; i < RTE_MAX_LCORE; i++) {
+	for (i = 0; i < MLX5_LIST_GLOBAL; i++) {
 		if (!l_inconst->cache[i])
 			continue;
-		entry = __list_lookup(l_inconst, l_const, i, ctx, false);
+		entry = __list_lookup(l_inconst, l_const, i,
+				      ctx, false);
 		if (entry)
 			break;
 	}
@@ -170,18 +172,11 @@ __list_cache_clean(struct mlx5_list_inconst *l_inconst,
 static inline struct mlx5_list_entry *
 _mlx5_list_register(struct mlx5_list_inconst *l_inconst,
 		    struct mlx5_list_const *l_const,
-		    void *ctx)
+		    void *ctx, int lcore_index)
 {
 	struct mlx5_list_entry *entry, *local_entry;
 	volatile uint32_t prev_gen_cnt = 0;
-	int lcore_index = rte_lcore_index(rte_lcore_id());
-
 	MLX5_ASSERT(l_inconst);
-	MLX5_ASSERT(lcore_index < RTE_MAX_LCORE);
-	if (unlikely(lcore_index == -1)) {
-		rte_errno = ENOTSUP;
-		return NULL;
-	}
 	if (unlikely(!l_inconst->cache[lcore_index])) {
 		l_inconst->cache[lcore_index] = mlx5_malloc(0,
 					sizeof(struct mlx5_list_cache),
@@ -202,7 +197,7 @@ _mlx5_list_register(struct mlx5_list_inconst *l_inconst,
 	if (l_const->lcores_share) {
 		/* 2. Lookup with read lock on global list, reuse if found. */
 		rte_rwlock_read_lock(&l_inconst->lock);
-		entry = __list_lookup(l_inconst, l_const, RTE_MAX_LCORE,
+		entry = __list_lookup(l_inconst, l_const, MLX5_LIST_GLOBAL,
 				      ctx, true);
 		if (likely(entry)) {
 			rte_rwlock_read_unlock(&l_inconst->lock);
@@ -241,7 +236,7 @@ _mlx5_list_register(struct mlx5_list_inconst *l_inconst,
 	if (unlikely(prev_gen_cnt != l_inconst->gen_cnt)) {
 		struct mlx5_list_entry *oentry =
 					__list_lookup(l_inconst, l_const,
-						      RTE_MAX_LCORE,
+						      MLX5_LIST_GLOBAL,
 						      ctx, true);
 
 		if (unlikely(oentry)) {
@@ -255,7 +250,7 @@ _mlx5_list_register(struct mlx5_list_inconst *l_inconst,
 		}
 	}
 	/* 5. Update lists. */
-	LIST_INSERT_HEAD(&l_inconst->cache[RTE_MAX_LCORE]->h, entry, next);
+	LIST_INSERT_HEAD(&l_inconst->cache[MLX5_LIST_GLOBAL]->h, entry, next);
 	l_inconst->gen_cnt++;
 	rte_rwlock_write_unlock(&l_inconst->lock);
 	LIST_INSERT_HEAD(&l_inconst->cache[lcore_index]->h, local_entry, next);
@@ -268,21 +263,30 @@ _mlx5_list_register(struct mlx5_list_inconst *l_inconst,
 struct mlx5_list_entry *
 mlx5_list_register(struct mlx5_list *list, void *ctx)
 {
-	return _mlx5_list_register(&list->l_inconst, &list->l_const, ctx);
+	struct mlx5_list_entry *entry;
+	int lcore_index = rte_lcore_index(rte_lcore_id());
+
+	if (unlikely(lcore_index == -1)) {
+		lcore_index = MLX5_LIST_NLCORE;
+		rte_spinlock_lock(&list->l_const.nlcore_lock);
+	}
+	entry = _mlx5_list_register(&list->l_inconst, &list->l_const, ctx,
+				    lcore_index);
+	if (unlikely(lcore_index == MLX5_LIST_NLCORE))
+		rte_spinlock_unlock(&list->l_const.nlcore_lock);
+	return entry;
 }
 
 static inline int
 _mlx5_list_unregister(struct mlx5_list_inconst *l_inconst,
 		      struct mlx5_list_const *l_const,
-		      struct mlx5_list_entry *entry)
+		      struct mlx5_list_entry *entry,
+		      int lcore_idx)
 {
 	struct mlx5_list_entry *gentry = entry->gentry;
-	int lcore_idx;
 
 	if (__atomic_sub_fetch(&entry->ref_cnt, 1, __ATOMIC_RELAXED) != 0)
 		return 1;
-	lcore_idx = rte_lcore_index(rte_lcore_id());
-	MLX5_ASSERT(lcore_idx < RTE_MAX_LCORE);
 	if (entry->lcore_idx == (uint32_t)lcore_idx) {
 		LIST_REMOVE(entry, next);
 		if (l_const->lcores_share)
@@ -321,7 +325,19 @@ int
 mlx5_list_unregister(struct mlx5_list *list,
 		     struct mlx5_list_entry *entry)
 {
-	return _mlx5_list_unregister(&list->l_inconst, &list->l_const, entry);
+	int ret;
+	int lcore_index = rte_lcore_index(rte_lcore_id());
+
+	if (unlikely(lcore_index == -1)) {
+		lcore_index = MLX5_LIST_NLCORE;
+		rte_spinlock_lock(&list->l_const.nlcore_lock);
+	}
+	ret = _mlx5_list_unregister(&list->l_inconst, &list->l_const, entry,
+				    lcore_index);
+	if (unlikely(lcore_index == MLX5_LIST_NLCORE))
+		rte_spinlock_unlock(&list->l_const.nlcore_lock);
+	return ret;
+
 }
 
 static void
@@ -332,13 +348,13 @@ mlx5_list_uninit(struct mlx5_list_inconst *l_inconst,
 	int i;
 
 	MLX5_ASSERT(l_inconst);
-	for (i = 0; i <= RTE_MAX_LCORE; i++) {
+	for (i = 0; i < MLX5_LIST_MAX; i++) {
 		if (!l_inconst->cache[i])
 			continue;
 		while (!LIST_EMPTY(&l_inconst->cache[i]->h)) {
 			entry = LIST_FIRST(&l_inconst->cache[i]->h);
 			LIST_REMOVE(entry, next);
-			if (i == RTE_MAX_LCORE) {
+			if (i == MLX5_LIST_GLOBAL) {
 				l_const->cb_remove(l_const->ctx, entry);
 				DRV_LOG(DEBUG, "mlx5 list %s entry %p "
 					"destroyed.", l_const->name,
@@ -347,7 +363,7 @@ mlx5_list_uninit(struct mlx5_list_inconst *l_inconst,
 				l_const->cb_clone_free(l_const->ctx, entry);
 			}
 		}
-		if (i != RTE_MAX_LCORE)
+		if (i != MLX5_LIST_GLOBAL)
 			mlx5_free(l_inconst->cache[i]);
 	}
 }
@@ -416,6 +432,7 @@ mlx5_hlist_create(const char *name, uint32_t size, bool direct_key,
 	h->l_const.cb_remove = cb_remove;
 	h->l_const.cb_clone = cb_clone;
 	h->l_const.cb_clone_free = cb_clone_free;
+	rte_spinlock_init(&h->l_const.nlcore_lock);
 	h->mask = act_size - 1;
 	h->direct_key = direct_key;
 	gc = (struct mlx5_list_cache *)&h->buckets[act_size];
@@ -449,28 +466,45 @@ mlx5_hlist_register(struct mlx5_hlist *h, uint64_t key, void *ctx)
 {
 	uint32_t idx;
 	struct mlx5_list_entry *entry;
+	int lcore_index = rte_lcore_index(rte_lcore_id());
 
 	if (h->direct_key)
 		idx = (uint32_t)(key & h->mask);
 	else
 		idx = rte_hash_crc_8byte(key, 0) & h->mask;
-	entry = _mlx5_list_register(&h->buckets[idx].l, &h->l_const, ctx);
+	if (unlikely(lcore_index == -1)) {
+		lcore_index = MLX5_LIST_NLCORE;
+		rte_spinlock_lock(&h->l_const.nlcore_lock);
+	}
+	entry = _mlx5_list_register(&h->buckets[idx].l, &h->l_const, ctx,
+				    lcore_index);
 	if (likely(entry)) {
 		if (h->l_const.lcores_share)
 			entry->gentry->bucket_idx = idx;
 		else
 			entry->bucket_idx = idx;
 	}
+	if (unlikely(lcore_index == MLX5_LIST_NLCORE))
+		rte_spinlock_unlock(&h->l_const.nlcore_lock);
 	return entry;
 }
 
 int
 mlx5_hlist_unregister(struct mlx5_hlist *h, struct mlx5_list_entry *entry)
 {
+	int lcore_index = rte_lcore_index(rte_lcore_id());
+	int ret;
 	uint32_t idx = h->l_const.lcores_share ? entry->gentry->bucket_idx :
 							      entry->bucket_idx;
-
-	return _mlx5_list_unregister(&h->buckets[idx].l, &h->l_const, entry);
+	if (unlikely(lcore_index == -1)) {
+		lcore_index = MLX5_LIST_NLCORE;
+		rte_spinlock_lock(&h->l_const.nlcore_lock);
+	}
+	ret = _mlx5_list_unregister(&h->buckets[idx].l, &h->l_const, entry,
+				    lcore_index);
+	if (unlikely(lcore_index == MLX5_LIST_NLCORE))
+		rte_spinlock_unlock(&h->l_const.nlcore_lock);
+	return ret;
 }
 
 void
diff --git a/drivers/common/mlx5/mlx5_common_utils.h b/drivers/common/mlx5/mlx5_common_utils.h
index 9e8ebe772a..95106afc5e 100644
--- a/drivers/common/mlx5/mlx5_common_utils.h
+++ b/drivers/common/mlx5/mlx5_common_utils.h
@@ -11,6 +11,12 @@
 
 /** Maximum size of string for naming. */
 #define MLX5_NAME_SIZE 32
+/** Maximum size of list. */
+#define MLX5_LIST_MAX (RTE_MAX_LCORE + 2)
+/** Global list index. */
+#define MLX5_LIST_GLOBAL ((MLX5_LIST_MAX) - 1)
+/** None rte core list index. */
+#define MLX5_LIST_NLCORE ((MLX5_LIST_MAX) - 2)
 
 struct mlx5_list;
 
@@ -87,6 +93,7 @@ struct mlx5_list_const {
 	char name[MLX5_NAME_SIZE]; /**< Name of the mlx5 list. */
 	void *ctx; /* user objects target to callback. */
 	bool lcores_share; /* Whether to share objects between the lcores. */
+	rte_spinlock_t nlcore_lock; /* Lock for non-lcore list. */
 	mlx5_list_create_cb cb_create; /**< entry create callback. */
 	mlx5_list_match_cb cb_match; /**< entry match callback. */
 	mlx5_list_remove_cb cb_remove; /**< entry remove callback. */
@@ -102,7 +109,7 @@ struct mlx5_list_inconst {
 	rte_rwlock_t lock; /* read/write lock. */
 	volatile uint32_t gen_cnt; /* List modification may update it. */
 	volatile uint32_t count; /* number of entries in list. */
-	struct mlx5_list_cache *cache[RTE_MAX_LCORE + 1];
+	struct mlx5_list_cache *cache[MLX5_LIST_MAX];
 	/* Lcore cache, last index is the global cache. */
 };
 
-- 
2.25.1