From mboxrd@z Thu Jan 1 00:00:00 1970
From: Suanming Mou
Date: Tue, 13 Jul 2021 11:44:44 +0300
Subject: [dpdk-dev] [PATCH v6 10/26] net/mlx5: manage list cache entries release
Message-ID: <20210713084500.19964-11-suanmingm@nvidia.com>
X-Mailer: git-send-email 2.18.1
In-Reply-To: <20210713084500.19964-1-suanmingm@nvidia.com>
References: <20210527093403.1153127-1-suanmingm@nvidia.com>
 <20210713084500.19964-1-suanmingm@nvidia.com>
MIME-Version: 1.0
Content-Type: text/plain
List-Id: DPDK patches and discussions

From: Matan Azrad

When a cache entry is allocated by lcore A and released by lcore B, the
driver must synchronize access to lcore A's cache list. The design
decision is to manage a counter per lcore cache that is incremented
atomically whenever a non-original lcore decreases the reference count
of a cache entry to 0. In the list register operation, before the
running lcore starts a lookup in its own cache, it checks this counter
in order to free the invalidated entries in its cache.

Signed-off-by: Matan Azrad
Acked-by: Suanming Mou
---
 drivers/net/mlx5/mlx5_utils.c | 79 +++++++++++++++++++++++------------
 drivers/net/mlx5/mlx5_utils.h |  2 +
 2 files changed, 54 insertions(+), 27 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_utils.c b/drivers/net/mlx5/mlx5_utils.c
index c4c9adb039..13c7dbe1c2 100644
--- a/drivers/net/mlx5/mlx5_utils.c
+++ b/drivers/net/mlx5/mlx5_utils.c
@@ -47,36 +47,25 @@ __list_lookup(struct mlx5_list *list, int lcore_index, void *ctx, bool reuse)
 	uint32_t ret;
 
 	while (entry != NULL) {
-		struct mlx5_list_entry *nentry = LIST_NEXT(entry, next);
-
-		if (list->cb_match(list, entry, ctx)) {
-			if (lcore_index < RTE_MAX_LCORE) {
+		if (list->cb_match(list, entry, ctx) == 0) {
+			if (reuse) {
+				ret = __atomic_add_fetch(&entry->ref_cnt, 1,
+							 __ATOMIC_ACQUIRE) - 1;
+				DRV_LOG(DEBUG, "mlx5 list %s entry %p ref: %u.",
+					list->name, (void *)entry,
+					entry->ref_cnt);
+			} else if (lcore_index < RTE_MAX_LCORE) {
 				ret = __atomic_load_n(&entry->ref_cnt,
 						      __ATOMIC_ACQUIRE);
-				if (ret == 0) {
-					LIST_REMOVE(entry, next);
-					list->cb_clone_free(list, entry);
-				}
-			}
-			entry = nentry;
-			continue;
-		}
-		if (reuse) {
-			ret = __atomic_add_fetch(&entry->ref_cnt, 1,
-						 __ATOMIC_ACQUIRE);
-			if (ret == 1u) {
-				/* Entry was invalid before, free it. */
-				LIST_REMOVE(entry, next);
-				list->cb_clone_free(list, entry);
-				entry = nentry;
-				continue;
 			}
-			DRV_LOG(DEBUG, "mlx5 list %s entry %p ref++: %u.",
-				list->name, (void *)entry, entry->ref_cnt);
+			if (likely(ret != 0 || lcore_index == RTE_MAX_LCORE))
+				return entry;
+			if (reuse && ret == 0)
+				entry->ref_cnt--; /* Invalid entry. */
 		}
-		break;
+		entry = LIST_NEXT(entry, next);
 	}
-	return entry;
+	return NULL;
 }
 
 struct mlx5_list_entry *
@@ -105,10 +94,31 @@ mlx5_list_cache_insert(struct mlx5_list *list, int lcore_index,
 		return NULL;
 	lentry->ref_cnt = 1u;
 	lentry->gentry = gentry;
+	lentry->lcore_idx = (uint32_t)lcore_index;
 	LIST_INSERT_HEAD(&list->cache[lcore_index].h, lentry, next);
 	return lentry;
 }
 
+static void
+__list_cache_clean(struct mlx5_list *list, int lcore_index)
+{
+	struct mlx5_list_cache *c = &list->cache[lcore_index];
+	struct mlx5_list_entry *entry = LIST_FIRST(&c->h);
+	uint32_t inv_cnt = __atomic_exchange_n(&c->inv_cnt, 0,
+					       __ATOMIC_RELAXED);
+
+	while (inv_cnt != 0 && entry != NULL) {
+		struct mlx5_list_entry *nentry = LIST_NEXT(entry, next);
+
+		if (__atomic_load_n(&entry->ref_cnt, __ATOMIC_RELAXED) == 0) {
+			LIST_REMOVE(entry, next);
+			list->cb_clone_free(list, entry);
+			inv_cnt--;
+		}
+		entry = nentry;
+	}
+}
+
 struct mlx5_list_entry *
 mlx5_list_register(struct mlx5_list *list, void *ctx)
 {
@@ -122,6 +132,8 @@ mlx5_list_register(struct mlx5_list *list, void *ctx)
 		rte_errno = ENOTSUP;
 		return NULL;
 	}
+	/* 0. Free entries that were invalidated by other lcores. */
+	__list_cache_clean(list, lcore_index);
 	/* 1. Lookup in local cache. */
 	local_entry = __list_lookup(list, lcore_index, ctx, true);
 	if (local_entry)
@@ -147,6 +159,7 @@ mlx5_list_register(struct mlx5_list *list, void *ctx)
 	entry->ref_cnt = 1u;
 	local_entry->ref_cnt = 1u;
 	local_entry->gentry = entry;
+	local_entry->lcore_idx = (uint32_t)lcore_index;
 	rte_rwlock_write_lock(&list->lock);
 	/* 4. Make sure the same entry was not created before the write lock. */
 	if (unlikely(prev_gen_cnt != list->gen_cnt)) {
@@ -169,8 +182,8 @@ mlx5_list_register(struct mlx5_list *list, void *ctx)
 	rte_rwlock_write_unlock(&list->lock);
 	LIST_INSERT_HEAD(&list->cache[lcore_index].h, local_entry, next);
 	__atomic_add_fetch(&list->count, 1, __ATOMIC_ACQUIRE);
-	DRV_LOG(DEBUG, "mlx5 list %s entry %p new: %u.",
-		list->name, (void *)entry, entry->ref_cnt);
+	DRV_LOG(DEBUG, "mlx5 list %s entry %p new: %u.", list->name,
+		(void *)entry, entry->ref_cnt);
 	return local_entry;
 }
 
@@ -179,9 +192,21 @@ mlx5_list_unregister(struct mlx5_list *list,
 		      struct mlx5_list_entry *entry)
 {
 	struct mlx5_list_entry *gentry = entry->gentry;
+	int lcore_idx;
 
 	if (__atomic_sub_fetch(&entry->ref_cnt, 1, __ATOMIC_ACQUIRE) != 0)
 		return 1;
+	lcore_idx = rte_lcore_index(rte_lcore_id());
+	MLX5_ASSERT(lcore_idx < RTE_MAX_LCORE);
+	if (entry->lcore_idx == (uint32_t)lcore_idx) {
+		LIST_REMOVE(entry, next);
+		list->cb_clone_free(list, entry);
+	} else if (likely(lcore_idx != -1)) {
+		__atomic_add_fetch(&list->cache[entry->lcore_idx].inv_cnt, 1,
+				   __ATOMIC_RELAXED);
+	} else {
+		return 0;
+	}
 	if (__atomic_sub_fetch(&gentry->ref_cnt, 1, __ATOMIC_ACQUIRE) != 0)
 		return 1;
 	rte_rwlock_write_lock(&list->lock);
diff --git a/drivers/net/mlx5/mlx5_utils.h b/drivers/net/mlx5/mlx5_utils.h
index 6dade8238d..71da5ab4f9 100644
--- a/drivers/net/mlx5/mlx5_utils.h
+++ b/drivers/net/mlx5/mlx5_utils.h
@@ -311,11 +311,13 @@ struct mlx5_list;
 struct mlx5_list_entry {
 	LIST_ENTRY(mlx5_list_entry) next; /* Entry pointers in the list. */
 	uint32_t ref_cnt; /* 0 means, entry is invalid. */
+	uint32_t lcore_idx;
 	struct mlx5_list_entry *gentry;
 };
 
 struct mlx5_list_cache {
 	LIST_HEAD(mlx5_list_head, mlx5_list_entry) h;
+	uint32_t inv_cnt; /* Invalid entries counter. */
} __rte_cache_aligned;
 
 /**
-- 
2.25.1
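
For readers less familiar with this pattern, below is a minimal,
single-threaded C sketch of the cross-lcore release scheme the patch
implements. It is an illustration under assumptions, not driver code:
the toy_cache/toy_entry types and the toy_release()/toy_clean() helpers
are invented for the example; only the GCC __atomic built-ins and the
<sys/queue.h> list macros are the same ones the patch itself uses.

/*
 * Toy model of per-lcore caches with an "invalid entries" counter.
 * Build with: gcc -o toy toy.c
 */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/queue.h>

#define TOY_MAX_LCORE 4

struct toy_entry {
	LIST_ENTRY(toy_entry) next;
	uint32_t ref_cnt;   /* 0 means the entry is invalid. */
	uint32_t lcore_idx; /* Cache (lcore) that owns the entry. */
};

struct toy_cache {
	LIST_HEAD(, toy_entry) h;
	uint32_t inv_cnt;   /* Entries invalidated by other lcores. */
};

static struct toy_cache caches[TOY_MAX_LCORE];

/* Drop one reference held by lcore 'me'. */
static void
toy_release(struct toy_entry *entry, uint32_t me)
{
	if (__atomic_sub_fetch(&entry->ref_cnt, 1, __ATOMIC_ACQUIRE) != 0)
		return; /* Still referenced. */
	if (entry->lcore_idx == me) {
		/* Owner lcore: safe to unlink and free directly. */
		LIST_REMOVE(entry, next);
		free(entry);
	} else {
		/* Foreign lcore: only mark; the owner frees it later. */
		__atomic_add_fetch(&caches[entry->lcore_idx].inv_cnt, 1,
				   __ATOMIC_RELAXED);
	}
}

/* Run by the owner lcore before lookup, like __list_cache_clean(). */
static void
toy_clean(uint32_t me)
{
	struct toy_cache *c = &caches[me];
	uint32_t inv = __atomic_exchange_n(&c->inv_cnt, 0, __ATOMIC_RELAXED);
	struct toy_entry *entry = LIST_FIRST(&c->h);

	while (inv != 0 && entry != NULL) {
		struct toy_entry *nentry = LIST_NEXT(entry, next);

		if (__atomic_load_n(&entry->ref_cnt, __ATOMIC_RELAXED) == 0) {
			LIST_REMOVE(entry, next);
			free(entry);
			inv--;
		}
		entry = nentry;
	}
}

int
main(void)
{
	/* Lcore 0 allocates an entry into its own cache. */
	struct toy_entry *e = calloc(1, sizeof(*e));

	e->ref_cnt = 1;
	e->lcore_idx = 0;
	LIST_INSERT_HEAD(&caches[0].h, e, next);
	/* Lcore 1 drops the last reference: only the counter moves. */
	toy_release(e, 1);
	printf("inv_cnt on lcore 0: %u\n", caches[0].inv_cnt);
	/* Lcore 0 reclaims the invalid entry on its next register. */
	toy_clean(0);
	printf("inv_cnt after clean: %u\n", caches[0].inv_cnt);
	return 0;
}

The point to notice is in toy_release(): a non-owner lcore never
unlinks an entry from another lcore's list, it only increments an
atomic counter, which is what lets the per-lcore cache lists stay
lock-free in the patch.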