From: Suanming Mou
Date: Mon, 12 Jul 2021 04:46:38 +0300
Message-ID: <20210712014654.32428-11-suanmingm@nvidia.com>
X-Mailer: git-send-email 2.18.1
In-Reply-To: <20210712014654.32428-1-suanmingm@nvidia.com>
References: <20210527093403.1153127-1-suanmingm@nvidia.com> <20210712014654.32428-1-suanmingm@nvidia.com>
Subject: [dpdk-dev] [PATCH v5 10/26] net/mlx5: manage list cache entries release

From: Matan Azrad

When a cache entry is allocated by lcore A and is released by lcore B,
the driver must synchronize access to lcore A's cache list.

The design decision is to manage a counter per lcore cache that is
increased atomically when a non-original lcore decreases the reference
counter of a cache entry to 0.

In the list register operation, before the running lcore starts a
lookup in its own cache, it checks this counter in order to free the
invalid entries in its cache.

Signed-off-by: Matan Azrad
Acked-by: Suanming Mou
---
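Note to reviewers: below is a minimal, self-contained sketch of the
scheme described above. It is not the driver code: the toy_* names are
hypothetical, C11 atomics stand in for the GCC __atomic builtins used
in mlx5, and the rwlock-protected global list is omitted. The idea is
that a releasing lcore that does not own the entry only bumps the
owner's invalidation counter; the owner drains that counter and frees
zero-reference entries before its next lookup.

#include <stdatomic.h>
#include <stdlib.h>
#include <sys/queue.h>

struct toy_entry {
	LIST_ENTRY(toy_entry) next;
	atomic_uint ref_cnt;      /* 0 means the entry is invalid. */
	unsigned int owner_lcore; /* lcore whose cache holds this entry. */
};

struct toy_cache {
	LIST_HEAD(, toy_entry) h;
	atomic_uint inv_cnt; /* Entries invalidated by other lcores. */
};

/* Release: only the owner unlinks; other lcores signal the owner. */
static void
toy_release(struct toy_cache *caches, struct toy_entry *entry,
	    unsigned int my_lcore)
{
	if (atomic_fetch_sub(&entry->ref_cnt, 1) != 1)
		return; /* Still referenced elsewhere. */
	if (entry->owner_lcore == my_lcore) {
		/* Same lcore: safe to touch our own cache list. */
		LIST_REMOVE(entry, next);
		free(entry);
	} else {
		/* Cross-lcore release: defer the free to the owner. */
		atomic_fetch_add(&caches[entry->owner_lcore].inv_cnt, 1);
	}
}

/* Register prologue: drain pending invalidations before the lookup. */
static void
toy_cache_clean(struct toy_cache *c)
{
	unsigned int inv = atomic_exchange(&c->inv_cnt, 0);
	struct toy_entry *entry = LIST_FIRST(&c->h);

	while (inv != 0 && entry != NULL) {
		struct toy_entry *nentry = LIST_NEXT(entry, next);

		if (atomic_load(&entry->ref_cnt) == 0) {
			LIST_REMOVE(entry, next);
			free(entry);
			inv--;
		}
		entry = nentry;
	}
}

In the patch below, toy_release() corresponds to mlx5_list_unregister()
and toy_cache_clean() to __list_cache_clean(). The counter is exchanged
to zero in a single atomic step, so invalidations racing with the
cleanup are simply picked up by the next register call on the owning
lcore.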
 drivers/net/mlx5/mlx5_utils.c | 79 +++++++++++++++++++++++------------
 drivers/net/mlx5/mlx5_utils.h |  2 +
 2 files changed, 54 insertions(+), 27 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_utils.c b/drivers/net/mlx5/mlx5_utils.c
index c4c9adb039..13c7dbe1c2 100644
--- a/drivers/net/mlx5/mlx5_utils.c
+++ b/drivers/net/mlx5/mlx5_utils.c
@@ -47,36 +47,25 @@ __list_lookup(struct mlx5_list *list, int lcore_index, void *ctx, bool reuse)
 	uint32_t ret;
 
 	while (entry != NULL) {
-		struct mlx5_list_entry *nentry = LIST_NEXT(entry, next);
-
-		if (list->cb_match(list, entry, ctx)) {
-			if (lcore_index < RTE_MAX_LCORE) {
+		if (list->cb_match(list, entry, ctx) == 0) {
+			if (reuse) {
+				ret = __atomic_add_fetch(&entry->ref_cnt, 1,
+							 __ATOMIC_ACQUIRE) - 1;
+				DRV_LOG(DEBUG, "mlx5 list %s entry %p ref: %u.",
+					list->name, (void *)entry,
+					entry->ref_cnt);
+			} else if (lcore_index < RTE_MAX_LCORE) {
 				ret = __atomic_load_n(&entry->ref_cnt,
 						      __ATOMIC_ACQUIRE);
-				if (ret == 0) {
-					LIST_REMOVE(entry, next);
-					list->cb_clone_free(list, entry);
-				}
-			}
-			entry = nentry;
-			continue;
-		}
-		if (reuse) {
-			ret = __atomic_add_fetch(&entry->ref_cnt, 1,
-						 __ATOMIC_ACQUIRE);
-			if (ret == 1u) {
-				/* Entry was invalid before, free it. */
-				LIST_REMOVE(entry, next);
-				list->cb_clone_free(list, entry);
-				entry = nentry;
-				continue;
 			}
-			DRV_LOG(DEBUG, "mlx5 list %s entry %p ref++: %u.",
-				list->name, (void *)entry, entry->ref_cnt);
+			if (likely(ret != 0 || lcore_index == RTE_MAX_LCORE))
+				return entry;
+			if (reuse && ret == 0)
+				entry->ref_cnt--; /* Invalid entry. */
 		}
-		break;
+		entry = LIST_NEXT(entry, next);
 	}
-	return entry;
+	return NULL;
 }
 
 struct mlx5_list_entry *
@@ -105,10 +94,31 @@ mlx5_list_cache_insert(struct mlx5_list *list, int lcore_index,
 		return NULL;
 	lentry->ref_cnt = 1u;
 	lentry->gentry = gentry;
+	lentry->lcore_idx = (uint32_t)lcore_index;
 	LIST_INSERT_HEAD(&list->cache[lcore_index].h, lentry, next);
 	return lentry;
 }
 
+static void
+__list_cache_clean(struct mlx5_list *list, int lcore_index)
+{
+	struct mlx5_list_cache *c = &list->cache[lcore_index];
+	struct mlx5_list_entry *entry = LIST_FIRST(&c->h);
+	uint32_t inv_cnt = __atomic_exchange_n(&c->inv_cnt, 0,
+					       __ATOMIC_RELAXED);
+
+	while (inv_cnt != 0 && entry != NULL) {
+		struct mlx5_list_entry *nentry = LIST_NEXT(entry, next);
+
+		if (__atomic_load_n(&entry->ref_cnt, __ATOMIC_RELAXED) == 0) {
+			LIST_REMOVE(entry, next);
+			list->cb_clone_free(list, entry);
+			inv_cnt--;
+		}
+		entry = nentry;
+	}
+}
+
 struct mlx5_list_entry *
 mlx5_list_register(struct mlx5_list *list, void *ctx)
 {
@@ -122,6 +132,8 @@ mlx5_list_register(struct mlx5_list *list, void *ctx)
 		rte_errno = ENOTSUP;
 		return NULL;
 	}
+	/* 0. Free entries that were invalidated by other lcores. */
+	__list_cache_clean(list, lcore_index);
 	/* 1. Lookup in local cache. */
 	local_entry = __list_lookup(list, lcore_index, ctx, true);
 	if (local_entry)
@@ -147,6 +159,7 @@ mlx5_list_register(struct mlx5_list *list, void *ctx)
 	entry->ref_cnt = 1u;
 	local_entry->ref_cnt = 1u;
 	local_entry->gentry = entry;
+	local_entry->lcore_idx = (uint32_t)lcore_index;
 	rte_rwlock_write_lock(&list->lock);
 	/* 4. Make sure the same entry was not created before the write lock. */
 	if (unlikely(prev_gen_cnt != list->gen_cnt)) {
@@ -169,8 +182,8 @@ mlx5_list_register(struct mlx5_list *list, void *ctx)
 	rte_rwlock_write_unlock(&list->lock);
 	LIST_INSERT_HEAD(&list->cache[lcore_index].h, local_entry, next);
 	__atomic_add_fetch(&list->count, 1, __ATOMIC_ACQUIRE);
-	DRV_LOG(DEBUG, "mlx5 list %s entry %p new: %u.",
-		list->name, (void *)entry, entry->ref_cnt);
+	DRV_LOG(DEBUG, "mlx5 list %s entry %p new: %u.", list->name,
+		(void *)entry, entry->ref_cnt);
 	return local_entry;
 }
 
@@ -179,9 +192,21 @@ mlx5_list_unregister(struct mlx5_list *list,
 		      struct mlx5_list_entry *entry)
 {
 	struct mlx5_list_entry *gentry = entry->gentry;
+	int lcore_idx;
 
 	if (__atomic_sub_fetch(&entry->ref_cnt, 1, __ATOMIC_ACQUIRE) != 0)
 		return 1;
+	lcore_idx = rte_lcore_index(rte_lcore_id());
+	MLX5_ASSERT(lcore_idx < RTE_MAX_LCORE);
+	if (entry->lcore_idx == (uint32_t)lcore_idx) {
+		LIST_REMOVE(entry, next);
+		list->cb_clone_free(list, entry);
+	} else if (likely(lcore_idx != -1)) {
+		__atomic_add_fetch(&list->cache[entry->lcore_idx].inv_cnt, 1,
+				   __ATOMIC_RELAXED);
+	} else {
+		return 0;
+	}
 	if (__atomic_sub_fetch(&gentry->ref_cnt, 1, __ATOMIC_ACQUIRE) != 0)
 		return 1;
 	rte_rwlock_write_lock(&list->lock);
diff --git a/drivers/net/mlx5/mlx5_utils.h b/drivers/net/mlx5/mlx5_utils.h
index 6dade8238d..71da5ab4f9 100644
--- a/drivers/net/mlx5/mlx5_utils.h
+++ b/drivers/net/mlx5/mlx5_utils.h
@@ -311,11 +311,13 @@ struct mlx5_list;
 struct mlx5_list_entry {
 	LIST_ENTRY(mlx5_list_entry) next; /* Entry pointers in the list. */
 	uint32_t ref_cnt; /* 0 means, entry is invalid. */
+	uint32_t lcore_idx;
 	struct mlx5_list_entry *gentry;
 };
 
 struct mlx5_list_cache {
 	LIST_HEAD(mlx5_list_head, mlx5_list_entry) h;
+	uint32_t inv_cnt; /* Invalid entries counter. */
 } __rte_cache_aligned;
 
 /**
-- 
2.25.1