From: Suanming Mou <suanmingm@nvidia.com>
To: dev@dpdk.org
Date: Fri, 2 Jul 2021 09:18:12 +0300
Message-ID: <20210702061816.10454-19-suanmingm@nvidia.com>
X-Mailer: git-send-email 2.18.1
In-Reply-To: <20210702061816.10454-1-suanmingm@nvidia.com>
References: <20210527093403.1153127-1-suanmingm@nvidia.com>
 <20210702061816.10454-1-suanmingm@nvidia.com>
MIME-Version: 1.0
Content-Type: text/plain
Subject: [dpdk-dev] [PATCH v3 18/22] common/mlx5: optimize cache list object memory
List-Id: DPDK patches and discussions

Currently, the hash list uses the cache list as its bucket list, so the
lists in all the buckets carry the same name, ctx and callbacks, which
wastes memory.

This commit abstracts the name, ctx and callback members of the list
into a constant struct, keeps the remaining members in an inconstant
struct, and adds wrapper functions so that the hash list and the cache
list can each set the constant and inconstant structs individually.

Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Acked-by: Matan Azrad
---
 drivers/common/mlx5/mlx5_common_utils.c | 295 ++++++++++++++----
 drivers/common/mlx5/mlx5_common_utils.h |  45 ++--
 2 files changed, 201 insertions(+), 139 deletions(-)

diff --git a/drivers/common/mlx5/mlx5_common_utils.c b/drivers/common/mlx5/mlx5_common_utils.c
index f75b1cb0da..858c8d8164 100644
--- a/drivers/common/mlx5/mlx5_common_utils.c
+++ b/drivers/common/mlx5/mlx5_common_utils.c
@@ -14,34 +14,16 @@
 /********************* mlx5 list ************************/

 static int
-mlx5_list_init(struct mlx5_list *list, const char *name, void *ctx,
-	       bool lcores_share, struct mlx5_list_cache *gc,
-	       mlx5_list_create_cb cb_create,
-	       mlx5_list_match_cb cb_match,
-	       mlx5_list_remove_cb cb_remove,
-	       mlx5_list_clone_cb cb_clone,
-	       mlx5_list_clone_free_cb cb_clone_free)
+mlx5_list_init(struct mlx5_list_inconst *l_inconst,
+	       struct mlx5_list_const *l_const,
+	       struct mlx5_list_cache *gc)
 {
-	if (!cb_match || !cb_create || !cb_remove || !cb_clone ||
-	    !cb_clone_free) {
-		rte_errno = EINVAL;
-		return -EINVAL;
+	rte_rwlock_init(&l_inconst->lock);
+	if (l_const->lcores_share) {
+		l_inconst->cache[RTE_MAX_LCORE] = gc;
+		LIST_INIT(&l_inconst->cache[RTE_MAX_LCORE]->h);
 	}
-	if (name)
-		snprintf(list->name, sizeof(list->name), "%s", name);
-	list->ctx = ctx;
-	list->lcores_share = lcores_share;
-	list->cb_create = cb_create;
-	list->cb_match = cb_match;
-	list->cb_remove = cb_remove;
-	list->cb_clone = cb_clone;
-	list->cb_clone_free = cb_clone_free;
-	rte_rwlock_init(&list->lock);
-	if (lcores_share) {
-		list->cache[RTE_MAX_LCORE] = gc;
-		LIST_INIT(&list->cache[RTE_MAX_LCORE]->h);
-	}
-	DRV_LOG(DEBUG, "mlx5 list %s initialized.", list->name);
+	DRV_LOG(DEBUG, "mlx5 list %s initialized.", l_const->name);
 	return 0;
 }

@@ -56,16 +38,30 @@ mlx5_list_create(const char *name, void *ctx, bool lcores_share,
 	struct mlx5_list *list;
 	struct mlx5_list_cache *gc = NULL;

+	if (!cb_match || !cb_create || !cb_remove || !cb_clone ||
+	    !cb_clone_free) {
+		rte_errno = EINVAL;
+		return NULL;
+	}
 	list = mlx5_malloc(MLX5_MEM_ZERO, sizeof(*list) + (lcores_share ?
 			   sizeof(*gc) : 0), 0, SOCKET_ID_ANY);
+
 	if (!list)
 		return NULL;
+	if (name)
+		snprintf(list->l_const.name,
+			 sizeof(list->l_const.name), "%s", name);
+	list->l_const.ctx = ctx;
+	list->l_const.lcores_share = lcores_share;
+	list->l_const.cb_create = cb_create;
+	list->l_const.cb_match = cb_match;
+	list->l_const.cb_remove = cb_remove;
+	list->l_const.cb_clone = cb_clone;
+	list->l_const.cb_clone_free = cb_clone_free;
 	if (lcores_share)
 		gc = (struct mlx5_list_cache *)(list + 1);
-	if (mlx5_list_init(list, name, ctx, lcores_share, gc,
-			   cb_create, cb_match, cb_remove, cb_clone,
-			   cb_clone_free) != 0) {
+	if (mlx5_list_init(&list->l_inconst, &list->l_const, gc) != 0) {
 		mlx5_free(list);
 		return NULL;
 	}
@@ -73,19 +69,21 @@
 }

 static struct mlx5_list_entry *
-__list_lookup(struct mlx5_list *list, int lcore_index, void *ctx, bool reuse)
+__list_lookup(struct mlx5_list_inconst *l_inconst,
+	      struct mlx5_list_const *l_const,
+	      int lcore_index, void *ctx, bool reuse)
 {
 	struct mlx5_list_entry *entry =
-				LIST_FIRST(&list->cache[lcore_index]->h);
+				LIST_FIRST(&l_inconst->cache[lcore_index]->h);
 	uint32_t ret;

 	while (entry != NULL) {
-		if (list->cb_match(list->ctx, entry, ctx) == 0) {
+		if (l_const->cb_match(l_const->ctx, entry, ctx) == 0) {
 			if (reuse) {
 				ret = __atomic_add_fetch(&entry->ref_cnt, 1,
							 __ATOMIC_RELAXED) - 1;
				DRV_LOG(DEBUG, "mlx5 list %s entry %p ref: %u.",
-					list->name, (void *)entry,
+					l_const->name, (void *)entry,
 					entry->ref_cnt);
 			} else if (lcore_index < RTE_MAX_LCORE) {
 				ret = __atomic_load_n(&entry->ref_cnt,
@@ -101,41 +99,55 @@ __list_lookup(struct mlx5_list *list, int lcore_index, void *ctx, bool reuse)
 	return NULL;
 }

-struct mlx5_list_entry *
-mlx5_list_lookup(struct mlx5_list *list, void *ctx)
+static inline struct mlx5_list_entry *
+_mlx5_list_lookup(struct mlx5_list_inconst *l_inconst,
+		  struct mlx5_list_const *l_const, void *ctx)
 {
 	struct mlx5_list_entry *entry = NULL;
 	int i;

-	rte_rwlock_read_lock(&list->lock);
+	rte_rwlock_read_lock(&l_inconst->lock);
 	for (i = 0; i < RTE_MAX_LCORE; i++) {
-		entry = __list_lookup(list, i, ctx, false);
+		if (!l_inconst->cache[i])
+			continue;
+		entry = __list_lookup(l_inconst, l_const, i, ctx, false);
 		if (entry)
 			break;
 	}
-	rte_rwlock_read_unlock(&list->lock);
+	rte_rwlock_read_unlock(&l_inconst->lock);
 	return entry;
 }

+struct mlx5_list_entry *
+mlx5_list_lookup(struct mlx5_list *list, void *ctx)
+{
+	return _mlx5_list_lookup(&list->l_inconst, &list->l_const, ctx);
+}
+
+
 static struct mlx5_list_entry *
-mlx5_list_cache_insert(struct mlx5_list *list, int lcore_index,
+mlx5_list_cache_insert(struct mlx5_list_inconst *l_inconst,
+		       struct mlx5_list_const *l_const, int lcore_index,
 		       struct mlx5_list_entry *gentry, void *ctx)
 {
-	struct mlx5_list_entry *lentry = list->cb_clone(list->ctx, gentry, ctx);
+	struct mlx5_list_entry *lentry =
+			l_const->cb_clone(l_const->ctx, gentry, ctx);

 	if (unlikely(!lentry))
 		return NULL;
 	lentry->ref_cnt = 1u;
 	lentry->gentry = gentry;
 	lentry->lcore_idx = (uint32_t)lcore_index;
-	LIST_INSERT_HEAD(&list->cache[lcore_index]->h, lentry, next);
+	LIST_INSERT_HEAD(&l_inconst->cache[lcore_index]->h, lentry, next);
 	return lentry;
 }

 static void
-__list_cache_clean(struct mlx5_list *list, int lcore_index)
+__list_cache_clean(struct mlx5_list_inconst *l_inconst,
+		   struct mlx5_list_const *l_const,
+		   int lcore_index)
 {
-	struct mlx5_list_cache *c = list->cache[lcore_index];
+	struct mlx5_list_cache *c = l_inconst->cache[lcore_index];
 	struct mlx5_list_entry *entry = LIST_FIRST(&c->h);
 	uint32_t inv_cnt = __atomic_exchange_n(&c->inv_cnt, 0,
 					       __ATOMIC_RELAXED);
@@ -145,108 +157,123 @@ __list_cache_clean(struct mlx5_list *list, int lcore_index)
 		if (__atomic_load_n(&entry->ref_cnt,
 				    __ATOMIC_RELAXED) == 0) {
 			LIST_REMOVE(entry, next);
-			if (list->lcores_share)
-				list->cb_clone_free(list->ctx, entry);
+			if (l_const->lcores_share)
+				l_const->cb_clone_free(l_const->ctx, entry);
 			else
-				list->cb_remove(list->ctx, entry);
+				l_const->cb_remove(l_const->ctx, entry);
 			inv_cnt--;
 		}
 		entry = nentry;
 	}
 }

-struct mlx5_list_entry *
-mlx5_list_register(struct mlx5_list *list, void *ctx)
+static inline struct mlx5_list_entry *
+_mlx5_list_register(struct mlx5_list_inconst *l_inconst,
+		    struct mlx5_list_const *l_const,
+		    void *ctx)
 {
 	struct mlx5_list_entry *entry, *local_entry;
 	volatile uint32_t prev_gen_cnt = 0;
 	int lcore_index = rte_lcore_index(rte_lcore_id());

-	MLX5_ASSERT(list);
+	MLX5_ASSERT(l_inconst);
 	MLX5_ASSERT(lcore_index < RTE_MAX_LCORE);
 	if (unlikely(lcore_index == -1)) {
 		rte_errno = ENOTSUP;
 		return NULL;
 	}
-	if (unlikely(!list->cache[lcore_index])) {
-		list->cache[lcore_index] = mlx5_malloc(0,
+	if (unlikely(!l_inconst->cache[lcore_index])) {
+		l_inconst->cache[lcore_index] = mlx5_malloc(0,
 					sizeof(struct mlx5_list_cache),
 					RTE_CACHE_LINE_SIZE, SOCKET_ID_ANY);
-		if (!list->cache[lcore_index]) {
+		if (!l_inconst->cache[lcore_index]) {
 			rte_errno = ENOMEM;
 			return NULL;
 		}
-		list->cache[lcore_index]->inv_cnt = 0;
-		LIST_INIT(&list->cache[lcore_index]->h);
+		l_inconst->cache[lcore_index]->inv_cnt = 0;
+		LIST_INIT(&l_inconst->cache[lcore_index]->h);
 	}
 	/* 0. Free entries that was invalidated by other lcores. */
-	__list_cache_clean(list, lcore_index);
+	__list_cache_clean(l_inconst, l_const, lcore_index);
 	/* 1. Lookup in local cache. */
-	local_entry = __list_lookup(list, lcore_index, ctx, true);
+	local_entry = __list_lookup(l_inconst, l_const, lcore_index, ctx, true);
 	if (local_entry)
 		return local_entry;
-	if (list->lcores_share) {
+	if (l_const->lcores_share) {
 		/* 2. Lookup with read lock on global list, reuse if found. */
-		rte_rwlock_read_lock(&list->lock);
-		entry = __list_lookup(list, RTE_MAX_LCORE, ctx, true);
+		rte_rwlock_read_lock(&l_inconst->lock);
+		entry = __list_lookup(l_inconst, l_const, RTE_MAX_LCORE,
+				      ctx, true);
 		if (likely(entry)) {
-			rte_rwlock_read_unlock(&list->lock);
-			return mlx5_list_cache_insert(list, lcore_index, entry,
-						      ctx);
+			rte_rwlock_read_unlock(&l_inconst->lock);
+			return mlx5_list_cache_insert(l_inconst, l_const,
+						      lcore_index,
+						      entry, ctx);
 		}
-		prev_gen_cnt = list->gen_cnt;
-		rte_rwlock_read_unlock(&list->lock);
+		prev_gen_cnt = l_inconst->gen_cnt;
+		rte_rwlock_read_unlock(&l_inconst->lock);
 	}
 	/* 3. Prepare new entry for global list and for cache. */
-	entry = list->cb_create(list->ctx, ctx);
+	entry = l_const->cb_create(l_const->ctx, ctx);
 	if (unlikely(!entry))
 		return NULL;
 	entry->ref_cnt = 1u;
-	if (!list->lcores_share) {
+	if (!l_const->lcores_share) {
 		entry->lcore_idx = (uint32_t)lcore_index;
-		LIST_INSERT_HEAD(&list->cache[lcore_index]->h, entry, next);
-		__atomic_add_fetch(&list->count, 1, __ATOMIC_RELAXED);
+		LIST_INSERT_HEAD(&l_inconst->cache[lcore_index]->h,
+				 entry, next);
+		__atomic_add_fetch(&l_inconst->count, 1, __ATOMIC_RELAXED);
 		DRV_LOG(DEBUG, "MLX5 list %s c%d entry %p new: %u.",
-			list->name, lcore_index, (void *)entry, entry->ref_cnt);
+			l_const->name, lcore_index,
+			(void *)entry, entry->ref_cnt);
 		return entry;
 	}
-	local_entry = list->cb_clone(list->ctx, entry, ctx);
+	local_entry = l_const->cb_clone(l_const->ctx, entry, ctx);
 	if (unlikely(!local_entry)) {
-		list->cb_remove(list->ctx, entry);
+		l_const->cb_remove(l_const->ctx, entry);
 		return NULL;
 	}
 	local_entry->ref_cnt = 1u;
 	local_entry->gentry = entry;
 	local_entry->lcore_idx = (uint32_t)lcore_index;
-	rte_rwlock_write_lock(&list->lock);
+	rte_rwlock_write_lock(&l_inconst->lock);
 	/* 4. Make sure the same entry was not created before the write lock. */
-	if (unlikely(prev_gen_cnt != list->gen_cnt)) {
-		struct mlx5_list_entry *oentry = __list_lookup(list,
+	if (unlikely(prev_gen_cnt != l_inconst->gen_cnt)) {
+		struct mlx5_list_entry *oentry = __list_lookup(l_inconst,
+							       l_const,
							       RTE_MAX_LCORE,
							       ctx, true);
 		if (unlikely(oentry)) {
 			/* 4.5. Found real race!!, reuse the old entry. */
-			rte_rwlock_write_unlock(&list->lock);
-			list->cb_remove(list->ctx, entry);
-			list->cb_clone_free(list->ctx, local_entry);
-			return mlx5_list_cache_insert(list, lcore_index, oentry,
-						      ctx);
+			rte_rwlock_write_unlock(&l_inconst->lock);
+			l_const->cb_remove(l_const->ctx, entry);
+			l_const->cb_clone_free(l_const->ctx, local_entry);
+			return mlx5_list_cache_insert(l_inconst, l_const,
+						      lcore_index,
+						      oentry, ctx);
 		}
 	}
 	/* 5. Update lists. */
-	LIST_INSERT_HEAD(&list->cache[RTE_MAX_LCORE]->h, entry, next);
-	list->gen_cnt++;
-	rte_rwlock_write_unlock(&list->lock);
-	LIST_INSERT_HEAD(&list->cache[lcore_index]->h, local_entry, next);
-	__atomic_add_fetch(&list->count, 1, __ATOMIC_RELAXED);
-	DRV_LOG(DEBUG, "mlx5 list %s entry %p new: %u.", list->name,
+	LIST_INSERT_HEAD(&l_inconst->cache[RTE_MAX_LCORE]->h, entry, next);
+	l_inconst->gen_cnt++;
+	rte_rwlock_write_unlock(&l_inconst->lock);
+	LIST_INSERT_HEAD(&l_inconst->cache[lcore_index]->h, local_entry, next);
+	__atomic_add_fetch(&l_inconst->count, 1, __ATOMIC_RELAXED);
+	DRV_LOG(DEBUG, "mlx5 list %s entry %p new: %u.", l_const->name,
 		(void *)entry, entry->ref_cnt);
 	return local_entry;
 }

-int
-mlx5_list_unregister(struct mlx5_list *list,
+struct mlx5_list_entry *
+mlx5_list_register(struct mlx5_list *list, void *ctx)
+{
+	return _mlx5_list_register(&list->l_inconst, &list->l_const, ctx);
+}
+
+static inline int
+_mlx5_list_unregister(struct mlx5_list_inconst *l_inconst,
+		      struct mlx5_list_const *l_const,
 		     struct mlx5_list_entry *entry)
 {
 	struct mlx5_list_entry *gentry = entry->gentry;
@@ -258,69 +285,77 @@ mlx5_list_unregister(struct mlx5_list *list,
 	MLX5_ASSERT(lcore_idx < RTE_MAX_LCORE);
 	if (entry->lcore_idx == (uint32_t)lcore_idx) {
 		LIST_REMOVE(entry, next);
-		if (list->lcores_share)
-			list->cb_clone_free(list->ctx, entry);
+		if (l_const->lcores_share)
+			l_const->cb_clone_free(l_const->ctx, entry);
 		else
-			list->cb_remove(list->ctx, entry);
+			l_const->cb_remove(l_const->ctx, entry);
 	} else if (likely(lcore_idx != -1)) {
-		__atomic_add_fetch(&list->cache[entry->lcore_idx]->inv_cnt, 1,
-				   __ATOMIC_RELAXED);
+		__atomic_add_fetch(&l_inconst->cache[entry->lcore_idx]->inv_cnt,
+				   1, __ATOMIC_RELAXED);
 	} else {
 		return 0;
 	}
-	if (!list->lcores_share) {
-		__atomic_sub_fetch(&list->count, 1, __ATOMIC_RELAXED);
+	if (!l_const->lcores_share) {
+		__atomic_sub_fetch(&l_inconst->count, 1, __ATOMIC_RELAXED);
 		DRV_LOG(DEBUG, "mlx5 list %s entry %p removed.",
-			list->name, (void *)entry);
+			l_const->name, (void *)entry);
 		return 0;
 	}
 	if (__atomic_sub_fetch(&gentry->ref_cnt, 1, __ATOMIC_RELAXED) != 0)
 		return 1;
-	rte_rwlock_write_lock(&list->lock);
+	rte_rwlock_write_lock(&l_inconst->lock);
 	if (likely(gentry->ref_cnt == 0)) {
 		LIST_REMOVE(gentry, next);
-		rte_rwlock_write_unlock(&list->lock);
-		list->cb_remove(list->ctx, gentry);
-		__atomic_sub_fetch(&list->count, 1, __ATOMIC_RELAXED);
+		rte_rwlock_write_unlock(&l_inconst->lock);
+		l_const->cb_remove(l_const->ctx, gentry);
+		__atomic_sub_fetch(&l_inconst->count, 1, __ATOMIC_RELAXED);
 		DRV_LOG(DEBUG, "mlx5 list %s entry %p removed.",
-			list->name, (void *)gentry);
+			l_const->name, (void *)gentry);
 		return 0;
 	}
-	rte_rwlock_write_unlock(&list->lock);
+	rte_rwlock_write_unlock(&l_inconst->lock);
 	return 1;
 }

+int
+mlx5_list_unregister(struct mlx5_list *list,
+		      struct mlx5_list_entry *entry)
+{
+	return _mlx5_list_unregister(&list->l_inconst, &list->l_const, entry);
+}
+
 static void
-mlx5_list_uninit(struct mlx5_list *list)
+mlx5_list_uninit(struct mlx5_list_inconst *l_inconst,
+		 struct mlx5_list_const *l_const)
 {
 	struct mlx5_list_entry *entry;
 	int i;

-	MLX5_ASSERT(list);
+	MLX5_ASSERT(l_inconst);
 	for (i = 0; i <= RTE_MAX_LCORE; i++) {
-		if (!list->cache[i])
+		if (!l_inconst->cache[i])
 			continue;
-		while (!LIST_EMPTY(&list->cache[i]->h)) {
-			entry = LIST_FIRST(&list->cache[i]->h);
+		while (!LIST_EMPTY(&l_inconst->cache[i]->h)) {
+			entry = LIST_FIRST(&l_inconst->cache[i]->h);
 			LIST_REMOVE(entry, next);
 			if (i == RTE_MAX_LCORE) {
-				list->cb_remove(list->ctx, entry);
+				l_const->cb_remove(l_const->ctx, entry);
 				DRV_LOG(DEBUG, "mlx5 list %s entry %p "
-					"destroyed.", list->name,
+					"destroyed.", l_const->name,
 					(void *)entry);
 			} else {
-				list->cb_clone_free(list->ctx, entry);
+				l_const->cb_clone_free(l_const->ctx, entry);
 			}
 		}
 		if (i != RTE_MAX_LCORE)
-			mlx5_free(list->cache[i]);
+			mlx5_free(l_inconst->cache[i]);
 	}
 }

 void
 mlx5_list_destroy(struct mlx5_list *list)
 {
-	mlx5_list_uninit(list);
+	mlx5_list_uninit(&list->l_inconst, &list->l_const);
 	mlx5_free(list);
 }

@@ -328,7 +363,7 @@ uint32_t
 mlx5_list_get_entry_num(struct mlx5_list *list)
 {
 	MLX5_ASSERT(list);
-	return __atomic_load_n(&list->count, __ATOMIC_RELAXED);
+	return __atomic_load_n(&list->l_inconst.count, __ATOMIC_RELAXED);
 }

 /********************* Hash List **********************/
@@ -347,6 +382,11 @@ mlx5_hlist_create(const char *name, uint32_t size, bool direct_key,
 	uint32_t alloc_size;
 	uint32_t i;

+	if (!cb_match || !cb_create || !cb_remove || !cb_clone ||
+	    !cb_clone_free) {
+		rte_errno = EINVAL;
+		return NULL;
+	}
 	/* Align to the next power of 2, 32bits integer is enough now. */
 	if (!rte_is_power_of_2(size)) {
 		act_size = rte_align32pow2(size);
@@ -356,7 +396,7 @@ mlx5_hlist_create(const char *name, uint32_t size, bool direct_key,
 		act_size = size;
 	}
 	alloc_size = sizeof(struct mlx5_hlist) +
-		     sizeof(struct mlx5_hlist_bucket) * act_size;
+		     sizeof(struct mlx5_hlist_bucket) * act_size;
 	if (lcores_share)
 		alloc_size += sizeof(struct mlx5_list_cache) * act_size;
 	/* Using zmalloc, then no need to initialize the heads. */
@@ -367,15 +407,21 @@ mlx5_hlist_create(const char *name, uint32_t size, bool direct_key,
 			name ? name : "None");
 		return NULL;
 	}
+	if (name)
+		snprintf(h->l_const.name, sizeof(h->l_const.name), "%s", name);
+	h->l_const.ctx = ctx;
+	h->l_const.lcores_share = lcores_share;
+	h->l_const.cb_create = cb_create;
+	h->l_const.cb_match = cb_match;
+	h->l_const.cb_remove = cb_remove;
+	h->l_const.cb_clone = cb_clone;
+	h->l_const.cb_clone_free = cb_clone_free;
 	h->mask = act_size - 1;
-	h->lcores_share = lcores_share;
 	h->direct_key = direct_key;
 	gc = (struct mlx5_list_cache *)&h->buckets[act_size];
 	for (i = 0; i < act_size; i++) {
-		if (mlx5_list_init(&h->buckets[i].l, name, ctx, lcores_share,
-				   lcores_share ? &gc[i] : NULL,
-				   cb_create, cb_match, cb_remove, cb_clone,
-				   cb_clone_free) != 0) {
+		if (mlx5_list_init(&h->buckets[i].l, &h->l_const,
+				   lcores_share ? &gc[i] : NULL) != 0) {
 			mlx5_free(h);
 			return NULL;
 		}
@@ -385,6 +431,7 @@ mlx5_hlist_create(const char *name, uint32_t size, bool direct_key,
 	return h;
 }
+
 
 struct mlx5_list_entry *
 mlx5_hlist_lookup(struct mlx5_hlist *h, uint64_t key, void *ctx)
 {
@@ -394,7 +441,7 @@ mlx5_hlist_lookup(struct mlx5_hlist *h, uint64_t key, void *ctx)
 		idx = (uint32_t)(key & h->mask);
 	else
 		idx = rte_hash_crc_8byte(key, 0) & h->mask;
-	return mlx5_list_lookup(&h->buckets[idx].l, ctx);
+	return _mlx5_list_lookup(&h->buckets[idx].l, &h->l_const, ctx);
 }

 struct mlx5_list_entry*
@@ -407,9 +454,9 @@ mlx5_hlist_register(struct mlx5_hlist *h, uint64_t key, void *ctx)
 		idx = (uint32_t)(key & h->mask);
 	else
 		idx = rte_hash_crc_8byte(key, 0) & h->mask;
-	entry = mlx5_list_register(&h->buckets[idx].l, ctx);
+	entry = _mlx5_list_register(&h->buckets[idx].l, &h->l_const, ctx);
 	if (likely(entry)) {
-		if (h->lcores_share)
+		if (h->l_const.lcores_share)
 			entry->gentry->bucket_idx = idx;
 		else
 			entry->bucket_idx = idx;
@@ -420,10 +467,10 @@ mlx5_hlist_register(struct mlx5_hlist *h, uint64_t key, void *ctx)
 int
 mlx5_hlist_unregister(struct mlx5_hlist *h, struct mlx5_list_entry *entry)
 {
-	uint32_t idx = h->lcores_share ? entry->gentry->bucket_idx :
+	uint32_t idx = h->l_const.lcores_share ? entry->gentry->bucket_idx :
							  entry->bucket_idx;

-	return mlx5_list_unregister(&h->buckets[idx].l, entry);
+	return _mlx5_list_unregister(&h->buckets[idx].l, &h->l_const, entry);
 }

 void
@@ -432,6 +479,6 @@ mlx5_hlist_destroy(struct mlx5_hlist *h)
 	uint32_t i;

 	for (i = 0; i <= h->mask; i++)
-		mlx5_list_uninit(&h->buckets[i].l);
+		mlx5_list_uninit(&h->buckets[i].l, &h->l_const);
 	mlx5_free(h);
 }
diff --git a/drivers/common/mlx5/mlx5_common_utils.h b/drivers/common/mlx5/mlx5_common_utils.h
index 979dfafad4..9e8ebe772a 100644
--- a/drivers/common/mlx5/mlx5_common_utils.h
+++ b/drivers/common/mlx5/mlx5_common_utils.h
@@ -80,6 +80,32 @@ typedef void (*mlx5_list_clone_free_cb)(void *tool_ctx,
 typedef struct mlx5_list_entry *(*mlx5_list_create_cb)(void *tool_ctx,
							void *ctx);

+/**
+ * Linked mlx5 list constant object.
+ */
+struct mlx5_list_const {
+	char name[MLX5_NAME_SIZE]; /**< Name of the mlx5 list. */
+	void *ctx; /* user objects target to callback. */
+	bool lcores_share; /* Whether to share objects between the lcores. */
+	mlx5_list_create_cb cb_create; /**< entry create callback. */
+	mlx5_list_match_cb cb_match; /**< entry match callback. */
+	mlx5_list_remove_cb cb_remove; /**< entry remove callback. */
+	mlx5_list_clone_cb cb_clone; /**< entry clone callback. */
+	mlx5_list_clone_free_cb cb_clone_free;
+	/**< entry clone free callback. */
+};
+
+/**
+ * Linked mlx5 list inconstant data.
+ */
+struct mlx5_list_inconst {
+	rte_rwlock_t lock; /* read/write lock. */
+	volatile uint32_t gen_cnt; /* List modification may update it. */
+	volatile uint32_t count; /* number of entries in list. */
+	struct mlx5_list_cache *cache[RTE_MAX_LCORE + 1];
+	/* Lcore cache, last index is the global cache. */
+};
+
 /**
  * Linked mlx5 list structure.
  *
@@ -96,19 +122,8 @@ typedef struct mlx5_list_entry *(*mlx5_list_create_cb)(void *tool_ctx,
  *
  */
 struct mlx5_list {
-	char name[MLX5_NAME_SIZE]; /**< Name of the mlx5 list. */
-	void *ctx; /* user objects target to callback. */
-	bool lcores_share; /* Whether to share objects between the lcores. */
-	mlx5_list_create_cb cb_create; /**< entry create callback. */
-	mlx5_list_match_cb cb_match; /**< entry match callback. */
-	mlx5_list_remove_cb cb_remove; /**< entry remove callback. */
-	mlx5_list_clone_cb cb_clone; /**< entry clone callback. */
-	mlx5_list_clone_free_cb cb_clone_free;
-	struct mlx5_list_cache *cache[RTE_MAX_LCORE + 1];
-	/* Lcore cache, last index is the global cache. */
-	volatile uint32_t gen_cnt; /* List modification may update it. */
-	volatile uint32_t count; /* number of entries in list. */
-	rte_rwlock_t lock; /* read/write lock. */
+	struct mlx5_list_const l_const;
+	struct mlx5_list_inconst l_inconst;
 };

 /**
@@ -214,7 +229,7 @@ mlx5_list_get_entry_num(struct mlx5_list *list);

 /* Hash list bucket. */
 struct mlx5_hlist_bucket {
-	struct mlx5_list l;
+	struct mlx5_list_inconst l;
 } __rte_cache_aligned;

 /**
@@ -226,7 +241,7 @@
 struct mlx5_hlist {
 	uint32_t mask; /* A mask for the bucket index range. */
 	uint8_t flags;
 	bool direct_key; /* Whether to use the key directly as hash index. */
-	bool lcores_share; /* Whether to share objects between the lcores. */
+	struct mlx5_list_const l_const; /* List constant data. */
 	struct mlx5_hlist_bucket buckets[] __rte_cache_aligned;
 };
-- 
2.25.1