From: Suanming Mou
Date: Tue, 6 Jul 2021 16:32:41 +0300
Subject: [dpdk-dev] [PATCH v4 10/26] net/mlx5: manage list cache entries release
Message-ID: <20210706133257.3353-11-suanmingm@nvidia.com>
X-Mailer: git-send-email 2.18.1
In-Reply-To: <20210706133257.3353-1-suanmingm@nvidia.com>
References: <20210527093403.1153127-1-suanmingm@nvidia.com>
 <20210706133257.3353-1-suanmingm@nvidia.com>
List-Id: DPDK patches and discussions

From: Matan Azrad

When a cache entry is allocated by lcore A and is released by lcore B,
the driver should synchronize the cache list access
of lcore A. The design decision is to manage a counter per lcore cache
that is incremented atomically whenever a non-original lcore decrements
a cache entry's reference counter to 0. In the list register operation,
before the running lcore starts a lookup in its own cache, it checks
this counter in order to free the invalidated entries in its cache.

Signed-off-by: Matan Azrad
Acked-by: Suanming Mou
---
 drivers/net/mlx5/mlx5_utils.c | 79 +++++++++++++++++++++++------------
 drivers/net/mlx5/mlx5_utils.h |  2 +
 2 files changed, 54 insertions(+), 27 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_utils.c b/drivers/net/mlx5/mlx5_utils.c
index c4c9adb039..13c7dbe1c2 100644
--- a/drivers/net/mlx5/mlx5_utils.c
+++ b/drivers/net/mlx5/mlx5_utils.c
@@ -47,36 +47,25 @@ __list_lookup(struct mlx5_list *list, int lcore_index, void *ctx, bool reuse)
 	uint32_t ret;
 
 	while (entry != NULL) {
-		struct mlx5_list_entry *nentry = LIST_NEXT(entry, next);
-
-		if (list->cb_match(list, entry, ctx)) {
-			if (lcore_index < RTE_MAX_LCORE) {
+		if (list->cb_match(list, entry, ctx) == 0) {
+			if (reuse) {
+				ret = __atomic_add_fetch(&entry->ref_cnt, 1,
+							 __ATOMIC_ACQUIRE) - 1;
+				DRV_LOG(DEBUG, "mlx5 list %s entry %p ref: %u.",
+					list->name, (void *)entry,
+					entry->ref_cnt);
+			} else if (lcore_index < RTE_MAX_LCORE) {
 				ret = __atomic_load_n(&entry->ref_cnt,
 						      __ATOMIC_ACQUIRE);
-				if (ret == 0) {
-					LIST_REMOVE(entry, next);
-					list->cb_clone_free(list, entry);
-				}
-			}
-			entry = nentry;
-			continue;
-		}
-		if (reuse) {
-			ret = __atomic_add_fetch(&entry->ref_cnt, 1,
-						 __ATOMIC_ACQUIRE);
-			if (ret == 1u) {
-				/* Entry was invalid before, free it. */
-				LIST_REMOVE(entry, next);
-				list->cb_clone_free(list, entry);
-				entry = nentry;
-				continue;
 			}
-			DRV_LOG(DEBUG, "mlx5 list %s entry %p ref++: %u.",
-				list->name, (void *)entry, entry->ref_cnt);
+			if (likely(ret != 0 || lcore_index == RTE_MAX_LCORE))
+				return entry;
+			if (reuse && ret == 0)
+				entry->ref_cnt--; /* Invalid entry. */
 		}
-		break;
+		entry = LIST_NEXT(entry, next);
 	}
-	return entry;
+	return NULL;
 }
 
 struct mlx5_list_entry *
@@ -105,10 +94,31 @@ mlx5_list_cache_insert(struct mlx5_list *list, int lcore_index,
 		return NULL;
 	lentry->ref_cnt = 1u;
 	lentry->gentry = gentry;
+	lentry->lcore_idx = (uint32_t)lcore_index;
 	LIST_INSERT_HEAD(&list->cache[lcore_index].h, lentry, next);
 	return lentry;
 }
 
+static void
+__list_cache_clean(struct mlx5_list *list, int lcore_index)
+{
+	struct mlx5_list_cache *c = &list->cache[lcore_index];
+	struct mlx5_list_entry *entry = LIST_FIRST(&c->h);
+	uint32_t inv_cnt = __atomic_exchange_n(&c->inv_cnt, 0,
+					       __ATOMIC_RELAXED);
+
+	while (inv_cnt != 0 && entry != NULL) {
+		struct mlx5_list_entry *nentry = LIST_NEXT(entry, next);
+
+		if (__atomic_load_n(&entry->ref_cnt, __ATOMIC_RELAXED) == 0) {
+			LIST_REMOVE(entry, next);
+			list->cb_clone_free(list, entry);
+			inv_cnt--;
+		}
+		entry = nentry;
+	}
+}
+
 struct mlx5_list_entry *
 mlx5_list_register(struct mlx5_list *list, void *ctx)
 {
@@ -122,6 +132,8 @@ mlx5_list_register(struct mlx5_list *list, void *ctx)
 		rte_errno = ENOTSUP;
 		return NULL;
 	}
+	/* 0. Free entries that were invalidated by other lcores. */
+	__list_cache_clean(list, lcore_index);
 	/* 1. Lookup in local cache. */
 	local_entry = __list_lookup(list, lcore_index, ctx, true);
 	if (local_entry)
@@ -147,6 +159,7 @@ mlx5_list_register(struct mlx5_list *list, void *ctx)
 	entry->ref_cnt = 1u;
 	local_entry->ref_cnt = 1u;
 	local_entry->gentry = entry;
+	local_entry->lcore_idx = (uint32_t)lcore_index;
 	rte_rwlock_write_lock(&list->lock);
 	/* 4. Make sure the same entry was not created before the write lock. */
 	if (unlikely(prev_gen_cnt != list->gen_cnt)) {
@@ -169,8 +182,8 @@ mlx5_list_register(struct mlx5_list *list, void *ctx)
 	rte_rwlock_write_unlock(&list->lock);
 	LIST_INSERT_HEAD(&list->cache[lcore_index].h, local_entry, next);
 	__atomic_add_fetch(&list->count, 1, __ATOMIC_ACQUIRE);
-	DRV_LOG(DEBUG, "mlx5 list %s entry %p new: %u.",
-		list->name, (void *)entry, entry->ref_cnt);
+	DRV_LOG(DEBUG, "mlx5 list %s entry %p new: %u.", list->name,
+		(void *)entry, entry->ref_cnt);
 	return local_entry;
 }
 
@@ -179,9 +192,21 @@ mlx5_list_unregister(struct mlx5_list *list,
 		      struct mlx5_list_entry *entry)
 {
 	struct mlx5_list_entry *gentry = entry->gentry;
+	int lcore_idx;
 
 	if (__atomic_sub_fetch(&entry->ref_cnt, 1, __ATOMIC_ACQUIRE) != 0)
 		return 1;
+	lcore_idx = rte_lcore_index(rte_lcore_id());
+	MLX5_ASSERT(lcore_idx < RTE_MAX_LCORE);
+	if (entry->lcore_idx == (uint32_t)lcore_idx) {
+		LIST_REMOVE(entry, next);
+		list->cb_clone_free(list, entry);
+	} else if (likely(lcore_idx != -1)) {
+		__atomic_add_fetch(&list->cache[entry->lcore_idx].inv_cnt, 1,
+				   __ATOMIC_RELAXED);
+	} else {
+		return 0;
+	}
 	if (__atomic_sub_fetch(&gentry->ref_cnt, 1, __ATOMIC_ACQUIRE) != 0)
 		return 1;
 	rte_rwlock_write_lock(&list->lock);
diff --git a/drivers/net/mlx5/mlx5_utils.h b/drivers/net/mlx5/mlx5_utils.h
index 6dade8238d..71da5ab4f9 100644
--- a/drivers/net/mlx5/mlx5_utils.h
+++ b/drivers/net/mlx5/mlx5_utils.h
@@ -311,11 +311,13 @@ struct mlx5_list;
 struct mlx5_list_entry {
 	LIST_ENTRY(mlx5_list_entry) next; /* Entry pointers in the list. */
 	uint32_t ref_cnt; /* 0 means, entry is invalid. */
+	uint32_t lcore_idx;
 	struct mlx5_list_entry *gentry;
 };
 
 struct mlx5_list_cache {
 	LIST_HEAD(mlx5_list_head, mlx5_list_entry) h;
+	uint32_t inv_cnt; /* Invalid entries counter. */
 } __rte_cache_aligned;
 
 /**
-- 
2.25.1
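
To make the scheme above easier to follow outside the driver, here is a
minimal standalone sketch of the same invalidation-counter handshake,
using C11 atomics instead of the __atomic builtins and the rte_lcore
helpers. It is an illustration only, not part of the patch; the names
(struct cache, entry_release(), cache_clean()) are hypothetical
stand-ins for mlx5_list_cache, mlx5_list_unregister() and
__list_cache_clean(), and a plain singly linked list replaces the
sys/queue.h LIST macros.

/*
 * Illustration only -- not part of the patch above. Simplified model
 * of the per-lcore cache with cross-lcore invalidation counting.
 */
#include <stdatomic.h>
#include <stddef.h>
#include <stdlib.h>

struct entry {
	struct entry *next;
	atomic_uint ref_cnt;      /* 0 means the entry is invalid. */
	unsigned int owner_lcore; /* lcore whose cache holds the entry. */
};

struct cache {
	struct entry *head;
	atomic_uint inv_cnt; /* Entries invalidated by other lcores. */
};

/* Drop one reference; may be called from any lcore. */
static void
entry_release(struct cache *caches, struct entry *e, unsigned int lcore)
{
	if (atomic_fetch_sub(&e->ref_cnt, 1) != 1)
		return; /* Entry is still referenced. */
	if (e->owner_lcore == lcore) {
		/* Owning lcore: unlink and free directly; only the
		 * owner ever mutates its own list, so no race here.
		 */
		struct entry **pp = &caches[lcore].head;

		while (*pp != e)
			pp = &(*pp)->next;
		*pp = e->next;
		free(e);
	} else {
		/* Foreign lcore: never touch the owner's list, only
		 * bump its invalidation counter; the owner reclaims
		 * the entry on its next register operation.
		 */
		atomic_fetch_add(&caches[e->owner_lcore].inv_cnt, 1);
	}
}

/* Run by the owning lcore before it looks up its own cache. */
static void
cache_clean(struct cache *c)
{
	unsigned int inv = atomic_exchange(&c->inv_cnt, 0);
	struct entry **pp = &c->head;

	while (inv != 0 && *pp != NULL) {
		struct entry *e = *pp;

		if (atomic_load(&e->ref_cnt) == 0) {
			*pp = e->next; /* Unlink the invalidated entry. */
			free(e);
			inv--;
		} else {
			pp = &e->next;
		}
	}
}

The invariant the sketch preserves is the one the patch relies on:
list mutation is confined to the owning lcore, so a foreign lcore that
drops the last reference only publishes a relaxed counter increment,
and the owner frees the invalidated entries at a point where no lookup
of its own cache can be in flight.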