From mboxrd@z Thu Jan  1 00:00:00 1970
From: Suanming Mou
Date: Mon, 12 Jul 2021 04:46:41 +0300
Subject: [dpdk-dev] [PATCH v5 13/26] common/mlx5: move list utility to common
Message-ID: <20210712014654.32428-14-suanmingm@nvidia.com>
In-Reply-To: <20210712014654.32428-1-suanmingm@nvidia.com>
References: <20210527093403.1153127-1-suanmingm@nvidia.com>
 <20210712014654.32428-1-suanmingm@nvidia.com>
X-Mailer: git-send-email 2.18.1
MIME-Version: 1.0
Content-Type: text/plain
List-Id: DPDK patches and discussions
Errors-To: dev-bounces@dpdk.org
Sender: "dev"

Hash list is planned to be implemented with the cache list code.
This commit moves the list utility to common directory.
Signed-off-by: Suanming Mou
Acked-by: Matan Azrad
---
 drivers/common/mlx5/mlx5_common.h       |   2 +
 drivers/common/mlx5/mlx5_common_utils.c | 250 +++++++++++++++++++++++
 drivers/common/mlx5/mlx5_common_utils.h | 205 +++++++++++++++++++
 drivers/common/mlx5/version.map         |   7 +
 drivers/net/mlx5/mlx5_utils.c           | 251 ------------------------
 drivers/net/mlx5/mlx5_utils.h           | 197 -------------------
 6 files changed, 464 insertions(+), 448 deletions(-)

diff --git a/drivers/common/mlx5/mlx5_common.h b/drivers/common/mlx5/mlx5_common.h
index 306f2f1ab7..7fb7d40b38 100644
--- a/drivers/common/mlx5/mlx5_common.h
+++ b/drivers/common/mlx5/mlx5_common.h
@@ -14,6 +14,8 @@
 #include
 #include
 #include
+#include
+#include
 #include

 #include "mlx5_prm.h"
diff --git a/drivers/common/mlx5/mlx5_common_utils.c b/drivers/common/mlx5/mlx5_common_utils.c
index ad2011e858..8bb8a6016d 100644
--- a/drivers/common/mlx5/mlx5_common_utils.c
+++ b/drivers/common/mlx5/mlx5_common_utils.c
@@ -11,6 +11,256 @@
 #include "mlx5_common_utils.h"
 #include "mlx5_common_log.h"

+/********************* mlx5 list ************************/
+
+struct mlx5_list *
+mlx5_list_create(const char *name, void *ctx,
+		 mlx5_list_create_cb cb_create,
+		 mlx5_list_match_cb cb_match,
+		 mlx5_list_remove_cb cb_remove,
+		 mlx5_list_clone_cb cb_clone,
+		 mlx5_list_clone_free_cb cb_clone_free)
+{
+	struct mlx5_list *list;
+	int i;
+
+	if (!cb_match || !cb_create || !cb_remove || !cb_clone ||
+	    !cb_clone_free) {
+		rte_errno = EINVAL;
+		return NULL;
+	}
+	list = mlx5_malloc(MLX5_MEM_ZERO, sizeof(*list), 0, SOCKET_ID_ANY);
+	if (!list)
+		return NULL;
+	if (name)
+		snprintf(list->name, sizeof(list->name), "%s", name);
+	list->ctx = ctx;
+	list->cb_create = cb_create;
+	list->cb_match = cb_match;
+	list->cb_remove = cb_remove;
+	list->cb_clone = cb_clone;
+	list->cb_clone_free = cb_clone_free;
+	rte_rwlock_init(&list->lock);
+	DRV_LOG(DEBUG, "mlx5 list %s initialized.", list->name);
+	for (i = 0; i <= RTE_MAX_LCORE; i++)
+		LIST_INIT(&list->cache[i].h);
+	return list;
+}
+
+static struct mlx5_list_entry *
+__list_lookup(struct mlx5_list *list, int lcore_index, void *ctx, bool reuse)
+{
+	struct mlx5_list_entry *entry = LIST_FIRST(&list->cache[lcore_index].h);
+	uint32_t ret;
+
+	while (entry != NULL) {
+		if (list->cb_match(list, entry, ctx) == 0) {
+			if (reuse) {
+				ret = __atomic_add_fetch(&entry->ref_cnt, 1,
+							 __ATOMIC_RELAXED) - 1;
+				DRV_LOG(DEBUG, "mlx5 list %s entry %p ref: %u.",
+					list->name, (void *)entry,
+					entry->ref_cnt);
+			} else if (lcore_index < RTE_MAX_LCORE) {
+				ret = __atomic_load_n(&entry->ref_cnt,
+						      __ATOMIC_RELAXED);
+			}
+			if (likely(ret != 0 || lcore_index == RTE_MAX_LCORE))
+				return entry;
+			if (reuse && ret == 0)
+				entry->ref_cnt--; /* Invalid entry. */
+		}
+		entry = LIST_NEXT(entry, next);
+	}
+	return NULL;
+}
+
+struct mlx5_list_entry *
+mlx5_list_lookup(struct mlx5_list *list, void *ctx)
+{
+	struct mlx5_list_entry *entry = NULL;
+	int i;
+
+	rte_rwlock_read_lock(&list->lock);
+	for (i = 0; i < RTE_MAX_LCORE; i++) {
+		entry = __list_lookup(list, i, ctx, false);
+		if (entry)
+			break;
+	}
+	rte_rwlock_read_unlock(&list->lock);
+	return entry;
+}
+
+static struct mlx5_list_entry *
+mlx5_list_cache_insert(struct mlx5_list *list, int lcore_index,
+		       struct mlx5_list_entry *gentry, void *ctx)
+{
+	struct mlx5_list_entry *lentry = list->cb_clone(list, gentry, ctx);
+
+	if (unlikely(!lentry))
+		return NULL;
+	lentry->ref_cnt = 1u;
+	lentry->gentry = gentry;
+	lentry->lcore_idx = (uint32_t)lcore_index;
+	LIST_INSERT_HEAD(&list->cache[lcore_index].h, lentry, next);
+	return lentry;
+}
+
+static void
+__list_cache_clean(struct mlx5_list *list, int lcore_index)
+{
+	struct mlx5_list_cache *c = &list->cache[lcore_index];
+	struct mlx5_list_entry *entry = LIST_FIRST(&c->h);
+	uint32_t inv_cnt = __atomic_exchange_n(&c->inv_cnt, 0,
+					       __ATOMIC_RELAXED);
+
+	while (inv_cnt != 0 && entry != NULL) {
+		struct mlx5_list_entry *nentry = LIST_NEXT(entry, next);
+
+		if (__atomic_load_n(&entry->ref_cnt, __ATOMIC_RELAXED) == 0) {
+			LIST_REMOVE(entry, next);
+			list->cb_clone_free(list, entry);
+			inv_cnt--;
+		}
+		entry = nentry;
+	}
+}
+
+struct mlx5_list_entry *
+mlx5_list_register(struct mlx5_list *list, void *ctx)
+{
+	struct mlx5_list_entry *entry, *local_entry;
+	volatile uint32_t prev_gen_cnt = 0;
+	int lcore_index = rte_lcore_index(rte_lcore_id());
+
+	MLX5_ASSERT(list);
+	MLX5_ASSERT(lcore_index < RTE_MAX_LCORE);
+	if (unlikely(lcore_index == -1)) {
+		rte_errno = ENOTSUP;
+		return NULL;
+	}
+	/* 0. Free entries that was invalidated by other lcores. */
+	__list_cache_clean(list, lcore_index);
+	/* 1. Lookup in local cache. */
+	local_entry = __list_lookup(list, lcore_index, ctx, true);
+	if (local_entry)
+		return local_entry;
+	/* 2. Lookup with read lock on global list, reuse if found. */
+	rte_rwlock_read_lock(&list->lock);
+	entry = __list_lookup(list, RTE_MAX_LCORE, ctx, true);
+	if (likely(entry)) {
+		rte_rwlock_read_unlock(&list->lock);
+		return mlx5_list_cache_insert(list, lcore_index, entry, ctx);
+	}
+	prev_gen_cnt = list->gen_cnt;
+	rte_rwlock_read_unlock(&list->lock);
+	/* 3. Prepare new entry for global list and for cache. */
+	entry = list->cb_create(list, entry, ctx);
+	if (unlikely(!entry))
+		return NULL;
+	local_entry = list->cb_clone(list, entry, ctx);
+	if (unlikely(!local_entry)) {
+		list->cb_remove(list, entry);
+		return NULL;
+	}
+	entry->ref_cnt = 1u;
+	local_entry->ref_cnt = 1u;
+	local_entry->gentry = entry;
+	local_entry->lcore_idx = (uint32_t)lcore_index;
+	rte_rwlock_write_lock(&list->lock);
+	/* 4. Make sure the same entry was not created before the write lock. */
+	if (unlikely(prev_gen_cnt != list->gen_cnt)) {
+		struct mlx5_list_entry *oentry = __list_lookup(list,
+							       RTE_MAX_LCORE,
+							       ctx, true);
+
+		if (unlikely(oentry)) {
+			/* 4.5. Found real race!!, reuse the old entry. */
+			rte_rwlock_write_unlock(&list->lock);
+			list->cb_remove(list, entry);
+			list->cb_clone_free(list, local_entry);
+			return mlx5_list_cache_insert(list, lcore_index, oentry,
+						      ctx);
+		}
+	}
+	/* 5. Update lists. */
+	LIST_INSERT_HEAD(&list->cache[RTE_MAX_LCORE].h, entry, next);
+	list->gen_cnt++;
+	rte_rwlock_write_unlock(&list->lock);
+	LIST_INSERT_HEAD(&list->cache[lcore_index].h, local_entry, next);
+	__atomic_add_fetch(&list->count, 1, __ATOMIC_RELAXED);
+	DRV_LOG(DEBUG, "mlx5 list %s entry %p new: %u.", list->name,
+		(void *)entry, entry->ref_cnt);
+	return local_entry;
+}
+
+int
+mlx5_list_unregister(struct mlx5_list *list,
+		     struct mlx5_list_entry *entry)
+{
+	struct mlx5_list_entry *gentry = entry->gentry;
+	int lcore_idx;
+
+	if (__atomic_sub_fetch(&entry->ref_cnt, 1, __ATOMIC_RELAXED) != 0)
+		return 1;
+	lcore_idx = rte_lcore_index(rte_lcore_id());
+	MLX5_ASSERT(lcore_idx < RTE_MAX_LCORE);
+	if (entry->lcore_idx == (uint32_t)lcore_idx) {
+		LIST_REMOVE(entry, next);
+		list->cb_clone_free(list, entry);
+	} else if (likely(lcore_idx != -1)) {
+		__atomic_add_fetch(&list->cache[entry->lcore_idx].inv_cnt, 1,
+				   __ATOMIC_RELAXED);
+	} else {
+		return 0;
+	}
+	if (__atomic_sub_fetch(&gentry->ref_cnt, 1, __ATOMIC_RELAXED) != 0)
+		return 1;
+	rte_rwlock_write_lock(&list->lock);
+	if (likely(gentry->ref_cnt == 0)) {
+		LIST_REMOVE(gentry, next);
+		rte_rwlock_write_unlock(&list->lock);
+		list->cb_remove(list, gentry);
+		__atomic_sub_fetch(&list->count, 1, __ATOMIC_RELAXED);
+		DRV_LOG(DEBUG, "mlx5 list %s entry %p removed.",
+			list->name, (void *)gentry);
+		return 0;
+	}
+	rte_rwlock_write_unlock(&list->lock);
+	return 1;
+}
+
+void
+mlx5_list_destroy(struct mlx5_list *list)
+{
+	struct mlx5_list_entry *entry;
+	int i;
+
+	MLX5_ASSERT(list);
+	for (i = 0; i <= RTE_MAX_LCORE; i++) {
+		while (!LIST_EMPTY(&list->cache[i].h)) {
+			entry = LIST_FIRST(&list->cache[i].h);
+			LIST_REMOVE(entry, next);
+			if (i == RTE_MAX_LCORE) {
+				list->cb_remove(list, entry);
+				DRV_LOG(DEBUG, "mlx5 list %s entry %p "
+					"destroyed.", list->name,
+					(void *)entry);
+			} else {
+				list->cb_clone_free(list, entry);
+			}
+		}
+	}
+	mlx5_free(list);
+}
+
+uint32_t
+mlx5_list_get_entry_num(struct mlx5_list *list)
+{
+	MLX5_ASSERT(list);
+	return __atomic_load_n(&list->count, __ATOMIC_RELAXED);
+}
+
 /********************* Hash List **********************/

 static struct mlx5_hlist_entry *
diff --git a/drivers/common/mlx5/mlx5_common_utils.h b/drivers/common/mlx5/mlx5_common_utils.h
index ed378ce9bd..96add6d003 100644
--- a/drivers/common/mlx5/mlx5_common_utils.h
+++ b/drivers/common/mlx5/mlx5_common_utils.h
@@ -7,6 +7,211 @@

 #include "mlx5_common.h"

+/************************ mlx5 list *****************************/
+
+/** Maximum size of string for naming. */
+#define MLX5_NAME_SIZE 32
+
+struct mlx5_list;
+
+/**
+ * Structure of the entry in the mlx5 list, user should define its own struct
+ * that contains this in order to store the data.
+ */
+struct mlx5_list_entry {
+	LIST_ENTRY(mlx5_list_entry) next; /* Entry pointers in the list. */
+	uint32_t ref_cnt; /* 0 means, entry is invalid. */
+	uint32_t lcore_idx;
+	struct mlx5_list_entry *gentry;
+};
+
+struct mlx5_list_cache {
+	LIST_HEAD(mlx5_list_head, mlx5_list_entry) h;
+	uint32_t inv_cnt; /* Invalid entries counter. */
+} __rte_cache_aligned;
+
+/**
+ * Type of callback function for entry removal.
+ *
+ * @param list
+ *   The mlx5 list.
+ * @param entry
+ *   The entry in the list.
+ */
+typedef void (*mlx5_list_remove_cb)(struct mlx5_list *list,
+				    struct mlx5_list_entry *entry);
+
+/**
+ * Type of function for user defined matching.
+ *
+ * @param list
+ *   The mlx5 list.
+ * @param entry
+ *   The entry in the list.
+ * @param ctx
+ *   The pointer to new entry context.
+ *
+ * @return
+ *   0 if matching, non-zero number otherwise.
+ */
+typedef int (*mlx5_list_match_cb)(struct mlx5_list *list,
+				  struct mlx5_list_entry *entry, void *ctx);
+
+typedef struct mlx5_list_entry *(*mlx5_list_clone_cb)
+				 (struct mlx5_list *list,
+				  struct mlx5_list_entry *entry, void *ctx);
+
+typedef void (*mlx5_list_clone_free_cb)(struct mlx5_list *list,
+					struct mlx5_list_entry *entry);
+
+/**
+ * Type of function for user defined mlx5 list entry creation.
+ *
+ * @param list
+ *   The mlx5 list.
+ * @param entry
+ *   The new allocated entry, NULL if list entry size unspecified,
+ *   New entry has to be allocated in callback and return.
+ * @param ctx
+ *   The pointer to new entry context.
+ *
+ * @return
+ *   Pointer of entry on success, NULL otherwise.
+ */
+typedef struct mlx5_list_entry *(*mlx5_list_create_cb)
+				 (struct mlx5_list *list,
+				  struct mlx5_list_entry *entry,
+				  void *ctx);
+
+/**
+ * Linked mlx5 list structure.
+ *
+ * Entry in mlx5 list could be reused if entry already exists,
+ * reference count will increase and the existing entry returns.
+ *
+ * When destroy an entry from list, decrease reference count and only
+ * destroy when no further reference.
+ *
+ * Linked list is designed for limited number of entries,
+ * read mostly, less modification.
+ *
+ * For huge amount of entries, please consider hash list.
+ *
+ */
+struct mlx5_list {
+	char name[MLX5_NAME_SIZE]; /**< Name of the mlx5 list. */
+	volatile uint32_t gen_cnt;
+	/* List modification will update generation count. */
+	volatile uint32_t count; /* number of entries in list. */
+	void *ctx; /* user objects target to callback. */
+	rte_rwlock_t lock; /* read/write lock. */
+	mlx5_list_create_cb cb_create; /**< entry create callback. */
+	mlx5_list_match_cb cb_match; /**< entry match callback. */
+	mlx5_list_remove_cb cb_remove; /**< entry remove callback. */
+	mlx5_list_clone_cb cb_clone; /**< entry clone callback. */
+	mlx5_list_clone_free_cb cb_clone_free;
+	struct mlx5_list_cache cache[RTE_MAX_LCORE + 1];
+	/* Lcore cache, last index is the global cache. */
+};
+
+/**
+ * Create a mlx5 list.
+ *
+ * @param list
+ *   Pointer to the hast list table.
+ * @param name
+ *   Name of the mlx5 list.
+ * @param ctx
+ *   Pointer to the list context data.
+ * @param cb_create
+ *   Callback function for entry create.
+ * @param cb_match
+ *   Callback function for entry match.
+ * @param cb_remove
+ *   Callback function for entry remove.
+ * @return
+ *   List pointer on success, otherwise NULL.
+ */
+__rte_internal
+struct mlx5_list *mlx5_list_create(const char *name, void *ctx,
+				   mlx5_list_create_cb cb_create,
+				   mlx5_list_match_cb cb_match,
+				   mlx5_list_remove_cb cb_remove,
+				   mlx5_list_clone_cb cb_clone,
+				   mlx5_list_clone_free_cb cb_clone_free);
+
+/**
+ * Search an entry matching the key.
+ *
+ * Result returned might be destroyed by other thread, must use
+ * this function only in main thread.
+ *
+ * @param list
+ *   Pointer to the mlx5 list.
+ * @param ctx
+ *   Common context parameter used by entry callback function.
+ *
+ * @return
+ *   Pointer of the list entry if found, NULL otherwise.
+ */
+__rte_internal
+struct mlx5_list_entry *mlx5_list_lookup(struct mlx5_list *list,
+					 void *ctx);
+
+/**
+ * Reuse or create an entry to the mlx5 list.
+ *
+ * @param list
+ *   Pointer to the hast list table.
+ * @param ctx
+ *   Common context parameter used by callback function.
+ *
+ * @return
+ *   registered entry on success, NULL otherwise
+ */
+__rte_internal
+struct mlx5_list_entry *mlx5_list_register(struct mlx5_list *list,
+					   void *ctx);
+
+/**
+ * Remove an entry from the mlx5 list.
+ *
+ * User should guarantee the validity of the entry.
+ *
+ * @param list
+ *   Pointer to the hast list.
+ * @param entry
+ *   Entry to be removed from the mlx5 list table.
+ * @return
+ *   0 on entry removed, 1 on entry still referenced.
+ */
+__rte_internal
+int mlx5_list_unregister(struct mlx5_list *list,
+			 struct mlx5_list_entry *entry);
+
+/**
+ * Destroy the mlx5 list.
+ *
+ * @param list
+ *   Pointer to the mlx5 list.
+ */
+__rte_internal
+void mlx5_list_destroy(struct mlx5_list *list);
+
+/**
+ * Get entry number from the mlx5 list.
+ *
+ * @param list
+ *   Pointer to the hast list.
+ * @return
+ *   mlx5 list entry number.
+ */
+__rte_internal
+uint32_t
+mlx5_list_get_entry_num(struct mlx5_list *list);
+
+/************************ Hash list *****************************/
+
 #define MLX5_HLIST_DIRECT_KEY 0x0001 /* Use the key directly as hash index. */
 #define MLX5_HLIST_WRITE_MOST 0x0002 /* List mostly used for append new. */
diff --git a/drivers/common/mlx5/version.map b/drivers/common/mlx5/version.map
index b8be73a77b..e6586d6f6f 100644
--- a/drivers/common/mlx5/version.map
+++ b/drivers/common/mlx5/version.map
@@ -73,6 +73,13 @@ INTERNAL {
 	mlx5_glue;

+	mlx5_list_create;
+	mlx5_list_register;
+	mlx5_list_unregister;
+	mlx5_list_lookup;
+	mlx5_list_get_entry_num;
+	mlx5_list_destroy;
+
 	mlx5_hlist_create;
 	mlx5_hlist_lookup;
 	mlx5_hlist_register;
diff --git a/drivers/net/mlx5/mlx5_utils.c b/drivers/net/mlx5/mlx5_utils.c
index 0be778935f..e4e66ae4c5 100644
--- a/drivers/net/mlx5/mlx5_utils.c
+++ b/drivers/net/mlx5/mlx5_utils.c
@@ -8,257 +8,6 @@

 #include "mlx5_utils.h"

-
-/********************* mlx5 list ************************/
-
-struct mlx5_list *
-mlx5_list_create(const char *name, void *ctx,
-		 mlx5_list_create_cb cb_create,
-		 mlx5_list_match_cb cb_match,
-		 mlx5_list_remove_cb cb_remove,
-		 mlx5_list_clone_cb cb_clone,
-		 mlx5_list_clone_free_cb cb_clone_free)
-{
-	struct mlx5_list *list;
-	int i;
-
-	if (!cb_match || !cb_create || !cb_remove || !cb_clone ||
-	    !cb_clone_free) {
-		rte_errno = EINVAL;
-		return NULL;
-	}
-	list = mlx5_malloc(MLX5_MEM_ZERO, sizeof(*list), 0, SOCKET_ID_ANY);
-	if (!list)
-		return NULL;
-	if (name)
-		snprintf(list->name, sizeof(list->name), "%s", name);
-	list->ctx = ctx;
-	list->cb_create = cb_create;
-	list->cb_match = cb_match;
-	list->cb_remove = cb_remove;
-	list->cb_clone = cb_clone;
-	list->cb_clone_free = cb_clone_free;
-	rte_rwlock_init(&list->lock);
-	DRV_LOG(DEBUG, "mlx5 list %s initialized.", list->name);
-	for (i = 0; i <= RTE_MAX_LCORE; i++)
-		LIST_INIT(&list->cache[i].h);
-	return list;
-}
-
-static struct mlx5_list_entry *
-__list_lookup(struct mlx5_list *list, int lcore_index, void *ctx, bool reuse)
-{
-	struct mlx5_list_entry *entry = LIST_FIRST(&list->cache[lcore_index].h);
-	uint32_t ret;
-
-	while (entry != NULL) {
-		if (list->cb_match(list, entry, ctx) == 0) {
-			if (reuse) {
-				ret = __atomic_add_fetch(&entry->ref_cnt, 1,
-							 __ATOMIC_RELAXED) - 1;
-				DRV_LOG(DEBUG, "mlx5 list %s entry %p ref: %u.",
-					list->name, (void *)entry,
-					entry->ref_cnt);
-			} else if (lcore_index < RTE_MAX_LCORE) {
-				ret = __atomic_load_n(&entry->ref_cnt,
-						      __ATOMIC_RELAXED);
-			}
-			if (likely(ret != 0 || lcore_index == RTE_MAX_LCORE))
-				return entry;
-			if (reuse && ret == 0)
-				entry->ref_cnt--; /* Invalid entry. */
-		}
-		entry = LIST_NEXT(entry, next);
-	}
-	return NULL;
-}
-
-struct mlx5_list_entry *
-mlx5_list_lookup(struct mlx5_list *list, void *ctx)
-{
-	struct mlx5_list_entry *entry = NULL;
-	int i;
-
-	rte_rwlock_read_lock(&list->lock);
-	for (i = 0; i < RTE_MAX_LCORE; i++) {
-		entry = __list_lookup(list, i, ctx, false);
-		if (entry)
-			break;
-	}
-	rte_rwlock_read_unlock(&list->lock);
-	return entry;
-}
-
-static struct mlx5_list_entry *
-mlx5_list_cache_insert(struct mlx5_list *list, int lcore_index,
-		       struct mlx5_list_entry *gentry, void *ctx)
-{
-	struct mlx5_list_entry *lentry = list->cb_clone(list, gentry, ctx);
-
-	if (unlikely(!lentry))
-		return NULL;
-	lentry->ref_cnt = 1u;
-	lentry->gentry = gentry;
-	lentry->lcore_idx = (uint32_t)lcore_index;
-	LIST_INSERT_HEAD(&list->cache[lcore_index].h, lentry, next);
-	return lentry;
-}
-
-static void
-__list_cache_clean(struct mlx5_list *list, int lcore_index)
-{
-	struct mlx5_list_cache *c = &list->cache[lcore_index];
-	struct mlx5_list_entry *entry = LIST_FIRST(&c->h);
-	uint32_t inv_cnt = __atomic_exchange_n(&c->inv_cnt, 0,
-					       __ATOMIC_RELAXED);
-
-	while (inv_cnt != 0 && entry != NULL) {
-		struct mlx5_list_entry *nentry = LIST_NEXT(entry, next);
-
-		if (__atomic_load_n(&entry->ref_cnt, __ATOMIC_RELAXED) == 0) {
-			LIST_REMOVE(entry, next);
-			list->cb_clone_free(list, entry);
-			inv_cnt--;
-		}
-		entry = nentry;
-	}
-}
-
-struct mlx5_list_entry *
-mlx5_list_register(struct mlx5_list *list, void *ctx)
-{
-	struct mlx5_list_entry *entry, *local_entry;
-	volatile uint32_t prev_gen_cnt = 0;
-	int lcore_index = rte_lcore_index(rte_lcore_id());
-
-	MLX5_ASSERT(list);
-	MLX5_ASSERT(lcore_index < RTE_MAX_LCORE);
-	if (unlikely(lcore_index == -1)) {
-		rte_errno = ENOTSUP;
-		return NULL;
-	}
-	/* 0. Free entries that was invalidated by other lcores. */
-	__list_cache_clean(list, lcore_index);
-	/* 1. Lookup in local cache. */
-	local_entry = __list_lookup(list, lcore_index, ctx, true);
-	if (local_entry)
-		return local_entry;
-	/* 2. Lookup with read lock on global list, reuse if found. */
-	rte_rwlock_read_lock(&list->lock);
-	entry = __list_lookup(list, RTE_MAX_LCORE, ctx, true);
-	if (likely(entry)) {
-		rte_rwlock_read_unlock(&list->lock);
-		return mlx5_list_cache_insert(list, lcore_index, entry, ctx);
-	}
-	prev_gen_cnt = list->gen_cnt;
-	rte_rwlock_read_unlock(&list->lock);
-	/* 3. Prepare new entry for global list and for cache. */
-	entry = list->cb_create(list, entry, ctx);
-	if (unlikely(!entry))
-		return NULL;
-	local_entry = list->cb_clone(list, entry, ctx);
-	if (unlikely(!local_entry)) {
-		list->cb_remove(list, entry);
-		return NULL;
-	}
-	entry->ref_cnt = 1u;
-	local_entry->ref_cnt = 1u;
-	local_entry->gentry = entry;
-	local_entry->lcore_idx = (uint32_t)lcore_index;
-	rte_rwlock_write_lock(&list->lock);
-	/* 4. Make sure the same entry was not created before the write lock. */
-	if (unlikely(prev_gen_cnt != list->gen_cnt)) {
-		struct mlx5_list_entry *oentry = __list_lookup(list,
-							       RTE_MAX_LCORE,
-							       ctx, true);
-
-		if (unlikely(oentry)) {
-			/* 4.5. Found real race!!, reuse the old entry. */
-			rte_rwlock_write_unlock(&list->lock);
-			list->cb_remove(list, entry);
-			list->cb_clone_free(list, local_entry);
-			return mlx5_list_cache_insert(list, lcore_index, oentry,
-						      ctx);
-		}
-	}
-	/* 5. Update lists. */
-	LIST_INSERT_HEAD(&list->cache[RTE_MAX_LCORE].h, entry, next);
-	list->gen_cnt++;
-	rte_rwlock_write_unlock(&list->lock);
-	LIST_INSERT_HEAD(&list->cache[lcore_index].h, local_entry, next);
-	__atomic_add_fetch(&list->count, 1, __ATOMIC_RELAXED);
-	DRV_LOG(DEBUG, "mlx5 list %s entry %p new: %u.", list->name,
-		(void *)entry, entry->ref_cnt);
-	return local_entry;
-}
-
-int
-mlx5_list_unregister(struct mlx5_list *list,
-		     struct mlx5_list_entry *entry)
-{
-	struct mlx5_list_entry *gentry = entry->gentry;
-	int lcore_idx;
-
-	if (__atomic_sub_fetch(&entry->ref_cnt, 1, __ATOMIC_RELAXED) != 0)
-		return 1;
-	lcore_idx = rte_lcore_index(rte_lcore_id());
-	MLX5_ASSERT(lcore_idx < RTE_MAX_LCORE);
-	if (entry->lcore_idx == (uint32_t)lcore_idx) {
-		LIST_REMOVE(entry, next);
-		list->cb_clone_free(list, entry);
-	} else if (likely(lcore_idx != -1)) {
-		__atomic_add_fetch(&list->cache[entry->lcore_idx].inv_cnt, 1,
-				   __ATOMIC_RELAXED);
-	} else {
-		return 0;
-	}
-	if (__atomic_sub_fetch(&gentry->ref_cnt, 1, __ATOMIC_RELAXED) != 0)
-		return 1;
-	rte_rwlock_write_lock(&list->lock);
-	if (likely(gentry->ref_cnt == 0)) {
-		LIST_REMOVE(gentry, next);
-		rte_rwlock_write_unlock(&list->lock);
-		list->cb_remove(list, gentry);
-		__atomic_sub_fetch(&list->count, 1, __ATOMIC_RELAXED);
-		DRV_LOG(DEBUG, "mlx5 list %s entry %p removed.",
-			list->name, (void *)gentry);
-		return 0;
-	}
-	rte_rwlock_write_unlock(&list->lock);
-	return 1;
-}
-
-void
-mlx5_list_destroy(struct mlx5_list *list)
-{
-	struct mlx5_list_entry *entry;
-	int i;
-
-	MLX5_ASSERT(list);
-	for (i = 0; i <= RTE_MAX_LCORE; i++) {
-		while (!LIST_EMPTY(&list->cache[i].h)) {
-			entry = LIST_FIRST(&list->cache[i].h);
-			LIST_REMOVE(entry, next);
-			if (i == RTE_MAX_LCORE) {
-				list->cb_remove(list, entry);
-				DRV_LOG(DEBUG, "mlx5 list %s entry %p "
-					"destroyed.", list->name,
-					(void *)entry);
-			} else {
-				list->cb_clone_free(list, entry);
-			}
-		}
-	}
-	mlx5_free(list);
-}
-
-uint32_t
-mlx5_list_get_entry_num(struct mlx5_list *list)
-{
-	MLX5_ASSERT(list);
-	return __atomic_load_n(&list->count, __ATOMIC_RELAXED);
-}
-
 /********************* Indexed pool **********************/

 static inline void
diff --git a/drivers/net/mlx5/mlx5_utils.h b/drivers/net/mlx5/mlx5_utils.h
index ea64bb75c9..cf3db89403 100644
--- a/drivers/net/mlx5/mlx5_utils.h
+++ b/drivers/net/mlx5/mlx5_utils.h
@@ -297,203 +297,6 @@ log2above(unsigned int v)
 	return l + r;
 }

-/************************ mlx5 list *****************************/
-
-/** Maximum size of string for naming. */
-#define MLX5_NAME_SIZE 32
-
-struct mlx5_list;
-
-/**
- * Structure of the entry in the mlx5 list, user should define its own struct
- * that contains this in order to store the data.
- */
-struct mlx5_list_entry {
-	LIST_ENTRY(mlx5_list_entry) next; /* Entry pointers in the list. */
-	uint32_t ref_cnt; /* 0 means, entry is invalid. */
-	uint32_t lcore_idx;
-	struct mlx5_list_entry *gentry;
-};
-
-struct mlx5_list_cache {
-	LIST_HEAD(mlx5_list_head, mlx5_list_entry) h;
-	uint32_t inv_cnt; /* Invalid entries counter. */
-} __rte_cache_aligned;
-
-/**
- * Type of callback function for entry removal.
- *
- * @param list
- *   The mlx5 list.
- * @param entry
- *   The entry in the list.
- */
-typedef void (*mlx5_list_remove_cb)(struct mlx5_list *list,
-				    struct mlx5_list_entry *entry);
-
-/**
- * Type of function for user defined matching.
- *
- * @param list
- *   The mlx5 list.
- * @param entry
- *   The entry in the list.
- * @param ctx
- *   The pointer to new entry context.
- *
- * @return
- *   0 if matching, non-zero number otherwise.
- */
-typedef int (*mlx5_list_match_cb)(struct mlx5_list *list,
-				  struct mlx5_list_entry *entry, void *ctx);
-
-typedef struct mlx5_list_entry *(*mlx5_list_clone_cb)
-				 (struct mlx5_list *list,
-				  struct mlx5_list_entry *entry, void *ctx);
-
-typedef void (*mlx5_list_clone_free_cb)(struct mlx5_list *list,
-					struct mlx5_list_entry *entry);
-
-/**
- * Type of function for user defined mlx5 list entry creation.
- *
- * @param list
- *   The mlx5 list.
- * @param entry
- *   The new allocated entry, NULL if list entry size unspecified,
- *   New entry has to be allocated in callback and return.
- * @param ctx
- *   The pointer to new entry context.
- *
- * @return
- *   Pointer of entry on success, NULL otherwise.
- */
-typedef struct mlx5_list_entry *(*mlx5_list_create_cb)
-				 (struct mlx5_list *list,
-				  struct mlx5_list_entry *entry,
-				  void *ctx);
-
-/**
- * Linked mlx5 list structure.
- *
- * Entry in mlx5 list could be reused if entry already exists,
- * reference count will increase and the existing entry returns.
- *
- * When destroy an entry from list, decrease reference count and only
- * destroy when no further reference.
- *
- * Linked list is designed for limited number of entries,
- * read mostly, less modification.
- *
- * For huge amount of entries, please consider hash list.
- *
- */
-struct mlx5_list {
-	char name[MLX5_NAME_SIZE]; /**< Name of the mlx5 list. */
-	volatile uint32_t gen_cnt;
-	/* List modification will update generation count. */
-	volatile uint32_t count; /* number of entries in list. */
-	void *ctx; /* user objects target to callback. */
-	rte_rwlock_t lock; /* read/write lock. */
-	mlx5_list_create_cb cb_create; /**< entry create callback. */
-	mlx5_list_match_cb cb_match; /**< entry match callback. */
-	mlx5_list_remove_cb cb_remove; /**< entry remove callback. */
-	mlx5_list_clone_cb cb_clone; /**< entry clone callback. */
-	mlx5_list_clone_free_cb cb_clone_free;
-	struct mlx5_list_cache cache[RTE_MAX_LCORE + 1];
-	/* Lcore cache, last index is the global cache. */
-};
-
-/**
- * Create a mlx5 list.
- *
- * @param list
- *   Pointer to the hast list table.
- * @param name
- *   Name of the mlx5 list.
- * @param ctx
- *   Pointer to the list context data.
- * @param cb_create
- *   Callback function for entry create.
- * @param cb_match
- *   Callback function for entry match.
- * @param cb_remove
- *   Callback function for entry remove.
- * @return
- *   List pointer on success, otherwise NULL.
- */
-struct mlx5_list *mlx5_list_create(const char *name, void *ctx,
-				   mlx5_list_create_cb cb_create,
-				   mlx5_list_match_cb cb_match,
-				   mlx5_list_remove_cb cb_remove,
-				   mlx5_list_clone_cb cb_clone,
-				   mlx5_list_clone_free_cb cb_clone_free);
-
-/**
- * Search an entry matching the key.
- *
- * Result returned might be destroyed by other thread, must use
- * this function only in main thread.
- *
- * @param list
- *   Pointer to the mlx5 list.
- * @param ctx
- *   Common context parameter used by entry callback function.
- *
- * @return
- *   Pointer of the list entry if found, NULL otherwise.
- */
-struct mlx5_list_entry *mlx5_list_lookup(struct mlx5_list *list,
-					 void *ctx);
-
-/**
- * Reuse or create an entry to the mlx5 list.
- *
- * @param list
- *   Pointer to the hast list table.
- * @param ctx
- *   Common context parameter used by callback function.
- *
- * @return
- *   registered entry on success, NULL otherwise
- */
-struct mlx5_list_entry *mlx5_list_register(struct mlx5_list *list,
-					   void *ctx);
-
-/**
- * Remove an entry from the mlx5 list.
- *
- * User should guarantee the validity of the entry.
- *
- * @param list
- *   Pointer to the hast list.
- * @param entry
- *   Entry to be removed from the mlx5 list table.
- * @return
- *   0 on entry removed, 1 on entry still referenced.
- */
-int mlx5_list_unregister(struct mlx5_list *list,
-			 struct mlx5_list_entry *entry);
-
-/**
- * Destroy the mlx5 list.
- *
- * @param list
- *   Pointer to the mlx5 list.
- */
-void mlx5_list_destroy(struct mlx5_list *list);
-
-/**
- * Get entry number from the mlx5 list.
- *
- * @param list
- *   Pointer to the hast list.
- * @return
- *   mlx5 list entry number.
- */
-uint32_t
-mlx5_list_get_entry_num(struct mlx5_list *list);
-
 /********************************* indexed pool *************************/

 /**
-- 
2.25.1