From mboxrd@z Thu Jan 1 00:00:00 1970
From: Michael Baum
CC: Matan Azrad, Thomas Monjalon, Michael Baum
Date: Tue, 19 Oct 2021 23:56:00 +0300
Message-ID: <20211019205602.3188203-17-michaelba@nvidia.com>
In-Reply-To: <20211019205602.3188203-1-michaelba@nvidia.com>
References: <20211006220350.2357487-1-michaelba@nvidia.com>
 <20211019205602.3188203-1-michaelba@nvidia.com>
X-Mailer: git-send-email 2.25.1
MIME-Version: 1.0
Content-Type: text/plain
Content-Transfer-Encoding: 8bit
Subject: [dpdk-dev] [PATCH v3 16/18] common/mlx5: share MR management
List-Id: DPDK patches and discussions
Errors-To: dev-bounces@dpdk.org
Sender: "dev"

Add a global shared MR cache as a field of the common device structure.
Move MR management to use this global cache in all drivers.
Signed-off-by: Michael Baum Acked-by: Matan Azrad --- drivers/common/mlx5/mlx5_common.c | 54 ++++++++++++++++- drivers/common/mlx5/mlx5_common.h | 4 +- drivers/common/mlx5/mlx5_common_mr.c | 7 +-- drivers/common/mlx5/mlx5_common_mr.h | 4 -- drivers/common/mlx5/version.map | 4 -- drivers/compress/mlx5/mlx5_compress.c | 57 +----------------- drivers/crypto/mlx5/mlx5_crypto.c | 56 +---------------- drivers/crypto/mlx5/mlx5_crypto.h | 1 - drivers/net/mlx5/linux/mlx5_mp_os.c | 2 +- drivers/net/mlx5/linux/mlx5_os.c | 5 -- drivers/net/mlx5/mlx5.c | 36 ++--------- drivers/net/mlx5/mlx5.h | 3 - drivers/net/mlx5/mlx5_flow_aso.c | 28 ++++----- drivers/net/mlx5/mlx5_mr.c | 76 +++++++----------------- drivers/net/mlx5/mlx5_mr.h | 26 -------- drivers/net/mlx5/mlx5_rx.c | 1 - drivers/net/mlx5/mlx5_rx.h | 6 +- drivers/net/mlx5/mlx5_rxq.c | 4 +- drivers/net/mlx5/mlx5_rxtx.c | 1 - drivers/net/mlx5/mlx5_rxtx.h | 1 - drivers/net/mlx5/mlx5_rxtx_vec.h | 1 - drivers/net/mlx5/mlx5_trigger.c | 3 +- drivers/net/mlx5/mlx5_tx.c | 1 - drivers/net/mlx5/mlx5_tx.h | 1 - drivers/net/mlx5/mlx5_txq.c | 2 +- drivers/net/mlx5/windows/mlx5_os.c | 14 ----- drivers/regex/mlx5/mlx5_regex.c | 63 -------------------- drivers/regex/mlx5/mlx5_regex.h | 3 - drivers/regex/mlx5/mlx5_regex_control.c | 2 +- drivers/regex/mlx5/mlx5_regex_fastpath.c | 2 +- 30 files changed, 110 insertions(+), 358 deletions(-) delete mode 100644 drivers/net/mlx5/mlx5_mr.h diff --git a/drivers/common/mlx5/mlx5_common.c b/drivers/common/mlx5/mlx5_common.c index 17a54acf1e..d6acf87493 100644 --- a/drivers/common/mlx5/mlx5_common.c +++ b/drivers/common/mlx5/mlx5_common.c @@ -308,6 +308,41 @@ mlx5_dev_to_pci_str(const struct rte_device *dev, char *addr, size_t size) #endif } +/** + * Callback for memory event. + * + * @param event_type + * Memory event type. + * @param addr + * Address of memory. + * @param len + * Size of memory. 
+ */ +static void +mlx5_mr_mem_event_cb(enum rte_mem_event event_type, const void *addr, + size_t len, void *arg __rte_unused) +{ + struct mlx5_common_device *cdev; + + /* Must be called from the primary process. */ + MLX5_ASSERT(rte_eal_process_type() == RTE_PROC_PRIMARY); + switch (event_type) { + case RTE_MEM_EVENT_FREE: + pthread_mutex_lock(&devices_list_lock); + /* Iterate all the existing mlx5 devices. */ + TAILQ_FOREACH(cdev, &devices_list, next) + mlx5_free_mr_by_addr(&cdev->mr_scache, + mlx5_os_get_ctx_device_name + (cdev->ctx), + addr, len); + pthread_mutex_unlock(&devices_list_lock); + break; + case RTE_MEM_EVENT_ALLOC: + default: + break; + } +} + /** * Uninitialize all HW global of device context. * @@ -376,8 +411,13 @@ mlx5_common_dev_release(struct mlx5_common_device *cdev) pthread_mutex_lock(&devices_list_lock); TAILQ_REMOVE(&devices_list, cdev, next); pthread_mutex_unlock(&devices_list_lock); - if (rte_eal_process_type() == RTE_PROC_PRIMARY) + if (rte_eal_process_type() == RTE_PROC_PRIMARY) { + if (TAILQ_EMPTY(&devices_list)) + rte_mem_event_callback_unregister("MLX5_MEM_EVENT_CB", + NULL); + mlx5_mr_release_cache(&cdev->mr_scache); mlx5_dev_hw_global_release(cdev); + } rte_free(cdev); } @@ -412,6 +452,18 @@ mlx5_common_dev_create(struct rte_device *eal_dev, uint32_t classes) rte_free(cdev); return NULL; } + /* Initialize global MR cache resources and update its functions. */ + ret = mlx5_mr_create_cache(&cdev->mr_scache, eal_dev->numa_node); + if (ret) { + DRV_LOG(ERR, "Failed to initialize global MR share cache."); + mlx5_dev_hw_global_release(cdev); + rte_free(cdev); + return NULL; + } + /* Register callback function for global shared MR cache management. 
*/ + if (TAILQ_EMPTY(&devices_list)) + rte_mem_event_callback_register("MLX5_MEM_EVENT_CB", + mlx5_mr_mem_event_cb, NULL); exit: pthread_mutex_lock(&devices_list_lock); TAILQ_INSERT_HEAD(&devices_list, cdev, next); diff --git a/drivers/common/mlx5/mlx5_common.h b/drivers/common/mlx5/mlx5_common.h index 8df4f32aa2..1a6b8c0f52 100644 --- a/drivers/common/mlx5/mlx5_common.h +++ b/drivers/common/mlx5/mlx5_common.h @@ -350,6 +350,7 @@ struct mlx5_common_device { void *ctx; /* Verbs/DV/DevX context. */ void *pd; /* Protection Domain. */ uint32_t pdn; /* Protection Domain Number. */ + struct mlx5_mr_share_cache mr_scache; /* Global shared MR cache. */ struct mlx5_common_dev_config config; /* Device configuration. */ }; @@ -453,8 +454,7 @@ mlx5_dev_is_pci(const struct rte_device *dev); __rte_internal uint32_t mlx5_mr_mb2mr(struct mlx5_common_device *cdev, struct mlx5_mp_id *mp_id, - struct mlx5_mr_ctrl *mr_ctrl, struct rte_mbuf *mbuf, - struct mlx5_mr_share_cache *share_cache); + struct mlx5_mr_ctrl *mr_ctrl, struct rte_mbuf *mbuf); /* mlx5_common_os.c */ diff --git a/drivers/common/mlx5/mlx5_common_mr.c b/drivers/common/mlx5/mlx5_common_mr.c index 4de1c25f2a..d63e973b60 100644 --- a/drivers/common/mlx5/mlx5_common_mr.c +++ b/drivers/common/mlx5/mlx5_common_mr.c @@ -1848,16 +1848,13 @@ mlx5_mr_mempool2mr_bh(struct mlx5_mr_share_cache *share_cache, * Pointer to per-queue MR control structure. * @param mbuf * Pointer to mbuf. - * @param share_cache - * Pointer to a global shared MR cache. * * @return * Searched LKey on success, UINT32_MAX on no match. 
*/ uint32_t mlx5_mr_mb2mr(struct mlx5_common_device *cdev, struct mlx5_mp_id *mp_id, - struct mlx5_mr_ctrl *mr_ctrl, struct rte_mbuf *mbuf, - struct mlx5_mr_share_cache *share_cache) + struct mlx5_mr_ctrl *mr_ctrl, struct rte_mbuf *mbuf) { uint32_t lkey; uintptr_t addr = (uintptr_t)mbuf->buf_addr; @@ -1871,6 +1868,6 @@ mlx5_mr_mb2mr(struct mlx5_common_device *cdev, struct mlx5_mp_id *mp_id, if (likely(lkey != UINT32_MAX)) return lkey; /* Take slower bottom-half on miss. */ - return mlx5_mr_addr2mr_bh(cdev->pd, mp_id, share_cache, mr_ctrl, + return mlx5_mr_addr2mr_bh(cdev->pd, mp_id, &cdev->mr_scache, mr_ctrl, addr, cdev->config.mr_ext_memseg_en); } diff --git a/drivers/common/mlx5/mlx5_common_mr.h b/drivers/common/mlx5/mlx5_common_mr.h index 36689dfb54..0bc3519fd9 100644 --- a/drivers/common/mlx5/mlx5_common_mr.h +++ b/drivers/common/mlx5/mlx5_common_mr.h @@ -140,9 +140,7 @@ __rte_internal uint32_t mlx5_mr_mempool2mr_bh(struct mlx5_mr_share_cache *share_cache, struct mlx5_mr_ctrl *mr_ctrl, struct rte_mempool *mp, uintptr_t addr); -__rte_internal void mlx5_mr_release_cache(struct mlx5_mr_share_cache *mr_cache); -__rte_internal int mlx5_mr_create_cache(struct mlx5_mr_share_cache *share_cache, int socket); __rte_internal void mlx5_mr_dump_cache(struct mlx5_mr_share_cache *share_cache __rte_unused); @@ -150,7 +148,6 @@ __rte_internal void mlx5_mr_rebuild_cache(struct mlx5_mr_share_cache *share_cache); __rte_internal void mlx5_mr_flush_local_cache(struct mlx5_mr_ctrl *mr_ctrl); -__rte_internal void mlx5_free_mr_by_addr(struct mlx5_mr_share_cache *share_cache, const char *ibdev_name, const void *addr, size_t len); __rte_internal @@ -183,7 +180,6 @@ __rte_internal void mlx5_common_verbs_dereg_mr(struct mlx5_pmd_mr *pmd_mr); -__rte_internal void mlx5_os_set_reg_mr_cb(mlx5_reg_mr_t *reg_mr_cb, mlx5_dereg_mr_t *dereg_mr_cb); diff --git a/drivers/common/mlx5/version.map b/drivers/common/mlx5/version.map index 292c5ede89..12128e4738 100644 --- a/drivers/common/mlx5/version.map 
+++ b/drivers/common/mlx5/version.map @@ -109,7 +109,6 @@ INTERNAL { mlx5_mr_addr2mr_bh; mlx5_mr_btree_dump; mlx5_mr_btree_free; - mlx5_mr_create_cache; mlx5_mr_create_primary; mlx5_mr_ctrl_init; mlx5_mr_dump_cache; @@ -119,9 +118,7 @@ INTERNAL { mlx5_mr_lookup_cache; mlx5_mr_lookup_list; mlx5_mr_mb2mr; - mlx5_free_mr_by_addr; mlx5_mr_rebuild_cache; - mlx5_mr_release_cache; mlx5_nl_allmulti; # WINDOWS_NO_EXPORT mlx5_nl_ifindex; # WINDOWS_NO_EXPORT @@ -139,7 +136,6 @@ INTERNAL { mlx5_os_umem_dereg; mlx5_os_umem_reg; - mlx5_os_set_reg_mr_cb; mlx5_realloc; diff --git a/drivers/compress/mlx5/mlx5_compress.c b/drivers/compress/mlx5/mlx5_compress.c index a5cec27894..f68800ff5d 100644 --- a/drivers/compress/mlx5/mlx5_compress.c +++ b/drivers/compress/mlx5/mlx5_compress.c @@ -43,7 +43,6 @@ struct mlx5_compress_priv { struct rte_compressdev_config dev_config; LIST_HEAD(xform_list, mlx5_compress_xform) xform_list; rte_spinlock_t xform_sl; - struct mlx5_mr_share_cache mr_scache; /* Global shared MR cache. */ volatile uint64_t *uar_addr; /* HCA caps*/ uint32_t mmo_decomp_sq:1; @@ -206,7 +205,7 @@ mlx5_compress_qp_setup(struct rte_compressdev *dev, uint16_t qp_id, return -rte_errno; } dev->data->queue_pairs[qp_id] = qp; - if (mlx5_mr_ctrl_init(&qp->mr_ctrl, &priv->mr_scache.dev_gen, + if (mlx5_mr_ctrl_init(&qp->mr_ctrl, &priv->cdev->mr_scache.dev_gen, priv->dev_config.socket_id)) { DRV_LOG(ERR, "Cannot allocate MR Btree for qp %u.", (uint32_t)qp_id); @@ -444,8 +443,7 @@ mlx5_compress_dseg_set(struct mlx5_compress_qp *qp, uintptr_t addr = rte_pktmbuf_mtod_offset(mbuf, uintptr_t, offset); dseg->bcount = rte_cpu_to_be_32(len); - dseg->lkey = mlx5_mr_mb2mr(qp->priv->cdev, 0, &qp->mr_ctrl, mbuf, - &qp->priv->mr_scache); + dseg->lkey = mlx5_mr_mb2mr(qp->priv->cdev, 0, &qp->mr_ctrl, mbuf); dseg->pbuf = rte_cpu_to_be_64(addr); return dseg->lkey; } @@ -679,41 +677,6 @@ mlx5_compress_uar_prepare(struct mlx5_compress_priv *priv) return 0; } -/** - * Callback for memory event. 
- * - * @param event_type - * Memory event type. - * @param addr - * Address of memory. - * @param len - * Size of memory. - */ -static void -mlx5_compress_mr_mem_event_cb(enum rte_mem_event event_type, const void *addr, - size_t len, void *arg __rte_unused) -{ - struct mlx5_compress_priv *priv; - - /* Must be called from the primary process. */ - MLX5_ASSERT(rte_eal_process_type() == RTE_PROC_PRIMARY); - switch (event_type) { - case RTE_MEM_EVENT_FREE: - pthread_mutex_lock(&priv_list_lock); - /* Iterate all the existing mlx5 devices. */ - TAILQ_FOREACH(priv, &mlx5_compress_priv_list, next) - mlx5_free_mr_by_addr(&priv->mr_scache, - mlx5_os_get_ctx_device_name - (priv->cdev->ctx), - addr, len); - pthread_mutex_unlock(&priv_list_lock); - break; - case RTE_MEM_EVENT_ALLOC: - default: - break; - } -} - static int mlx5_compress_dev_probe(struct mlx5_common_device *cdev) { @@ -765,18 +728,6 @@ mlx5_compress_dev_probe(struct mlx5_common_device *cdev) rte_compressdev_pmd_destroy(priv->compressdev); return -1; } - if (mlx5_mr_create_cache(&priv->mr_scache, rte_socket_id()) != 0) { - DRV_LOG(ERR, "Failed to allocate shared cache MR memory."); - mlx5_compress_uar_release(priv); - rte_compressdev_pmd_destroy(priv->compressdev); - rte_errno = ENOMEM; - return -rte_errno; - } - /* Register callback function for global shared MR cache management. 
*/ - if (TAILQ_EMPTY(&mlx5_compress_priv_list)) - rte_mem_event_callback_register("MLX5_MEM_EVENT_CB", - mlx5_compress_mr_mem_event_cb, - NULL); pthread_mutex_lock(&priv_list_lock); TAILQ_INSERT_TAIL(&mlx5_compress_priv_list, priv, next); pthread_mutex_unlock(&priv_list_lock); @@ -796,10 +747,6 @@ mlx5_compress_dev_remove(struct mlx5_common_device *cdev) TAILQ_REMOVE(&mlx5_compress_priv_list, priv, next); pthread_mutex_unlock(&priv_list_lock); if (priv) { - if (TAILQ_EMPTY(&mlx5_compress_priv_list)) - rte_mem_event_callback_unregister("MLX5_MEM_EVENT_CB", - NULL); - mlx5_mr_release_cache(&priv->mr_scache); mlx5_compress_uar_release(priv); rte_compressdev_pmd_destroy(priv->compressdev); } diff --git a/drivers/crypto/mlx5/mlx5_crypto.c b/drivers/crypto/mlx5/mlx5_crypto.c index 1105d3fcd5..d857331225 100644 --- a/drivers/crypto/mlx5/mlx5_crypto.c +++ b/drivers/crypto/mlx5/mlx5_crypto.c @@ -316,8 +316,7 @@ mlx5_crypto_klm_set(struct mlx5_crypto_priv *priv, struct mlx5_crypto_qp *qp, *remain -= data_len; klm->bcount = rte_cpu_to_be_32(data_len); klm->pbuf = rte_cpu_to_be_64(addr); - klm->lkey = mlx5_mr_mb2mr(priv->cdev, 0, &qp->mr_ctrl, mbuf, - &priv->mr_scache); + klm->lkey = mlx5_mr_mb2mr(priv->cdev, 0, &qp->mr_ctrl, mbuf); return klm->lkey; } @@ -643,7 +642,7 @@ mlx5_crypto_queue_pair_setup(struct rte_cryptodev *dev, uint16_t qp_id, DRV_LOG(ERR, "Failed to create QP."); goto error; } - if (mlx5_mr_ctrl_init(&qp->mr_ctrl, &priv->mr_scache.dev_gen, + if (mlx5_mr_ctrl_init(&qp->mr_ctrl, &priv->cdev->mr_scache.dev_gen, priv->dev_config.socket_id) != 0) { DRV_LOG(ERR, "Cannot allocate MR Btree for qp %u.", (uint32_t)qp_id); @@ -844,41 +843,6 @@ mlx5_crypto_parse_devargs(struct rte_devargs *devargs, return 0; } -/** - * Callback for memory event. - * - * @param event_type - * Memory event type. - * @param addr - * Address of memory. - * @param len - * Size of memory. 
- */ -static void -mlx5_crypto_mr_mem_event_cb(enum rte_mem_event event_type, const void *addr, - size_t len, void *arg __rte_unused) -{ - struct mlx5_crypto_priv *priv; - - /* Must be called from the primary process. */ - MLX5_ASSERT(rte_eal_process_type() == RTE_PROC_PRIMARY); - switch (event_type) { - case RTE_MEM_EVENT_FREE: - pthread_mutex_lock(&priv_list_lock); - /* Iterate all the existing mlx5 devices. */ - TAILQ_FOREACH(priv, &mlx5_crypto_priv_list, next) - mlx5_free_mr_by_addr(&priv->mr_scache, - mlx5_os_get_ctx_device_name - (priv->cdev->ctx), - addr, len); - pthread_mutex_unlock(&priv_list_lock); - break; - case RTE_MEM_EVENT_ALLOC: - default: - break; - } -} - static int mlx5_crypto_dev_probe(struct mlx5_common_device *cdev) { @@ -940,13 +904,6 @@ mlx5_crypto_dev_probe(struct mlx5_common_device *cdev) rte_cryptodev_pmd_destroy(priv->crypto_dev); return -1; } - if (mlx5_mr_create_cache(&priv->mr_scache, rte_socket_id()) != 0) { - DRV_LOG(ERR, "Failed to allocate shared cache MR memory."); - mlx5_crypto_uar_release(priv); - rte_cryptodev_pmd_destroy(priv->crypto_dev); - rte_errno = ENOMEM; - return -rte_errno; - } priv->keytag = rte_cpu_to_be_64(devarg_prms.keytag); priv->max_segs_num = devarg_prms.max_segs_num; priv->umr_wqe_size = sizeof(struct mlx5_wqe_umr_bsf_seg) + @@ -960,11 +917,6 @@ mlx5_crypto_dev_probe(struct mlx5_common_device *cdev) priv->wqe_set_size = priv->umr_wqe_size + rdmw_wqe_size; priv->umr_wqe_stride = priv->umr_wqe_size / MLX5_SEND_WQE_BB; priv->max_rdmar_ds = rdmw_wqe_size / sizeof(struct mlx5_wqe_dseg); - /* Register callback function for global shared MR cache management. 
*/ - if (TAILQ_EMPTY(&mlx5_crypto_priv_list)) - rte_mem_event_callback_register("MLX5_MEM_EVENT_CB", - mlx5_crypto_mr_mem_event_cb, - NULL); pthread_mutex_lock(&priv_list_lock); TAILQ_INSERT_TAIL(&mlx5_crypto_priv_list, priv, next); pthread_mutex_unlock(&priv_list_lock); @@ -984,10 +936,6 @@ mlx5_crypto_dev_remove(struct mlx5_common_device *cdev) TAILQ_REMOVE(&mlx5_crypto_priv_list, priv, next); pthread_mutex_unlock(&priv_list_lock); if (priv) { - if (TAILQ_EMPTY(&mlx5_crypto_priv_list)) - rte_mem_event_callback_unregister("MLX5_MEM_EVENT_CB", - NULL); - mlx5_mr_release_cache(&priv->mr_scache); mlx5_crypto_uar_release(priv); rte_cryptodev_pmd_destroy(priv->crypto_dev); claim_zero(mlx5_devx_cmd_destroy(priv->login_obj)); diff --git a/drivers/crypto/mlx5/mlx5_crypto.h b/drivers/crypto/mlx5/mlx5_crypto.h index 030f369423..69cef81d77 100644 --- a/drivers/crypto/mlx5/mlx5_crypto.h +++ b/drivers/crypto/mlx5/mlx5_crypto.h @@ -26,7 +26,6 @@ struct mlx5_crypto_priv { uint32_t max_segs_num; /* Maximum supported data segs. */ struct mlx5_hlist *dek_hlist; /* Dek hash list. */ struct rte_cryptodev_config dev_config; - struct mlx5_mr_share_cache mr_scache; /* Global shared MR cache. 
*/ struct mlx5_devx_obj *login_obj; uint64_t keytag; uint16_t wqe_set_size; diff --git a/drivers/net/mlx5/linux/mlx5_mp_os.c b/drivers/net/mlx5/linux/mlx5_mp_os.c index 286a7caf36..c3b6495d9e 100644 --- a/drivers/net/mlx5/linux/mlx5_mp_os.c +++ b/drivers/net/mlx5/linux/mlx5_mp_os.c @@ -91,7 +91,7 @@ mlx5_mp_os_primary_handle(const struct rte_mp_msg *mp_msg, const void *peer) case MLX5_MP_REQ_CREATE_MR: mp_init_msg(&priv->mp_id, &mp_res, param->type); lkey = mlx5_mr_create_primary(cdev->pd, - &priv->sh->share_cache, + &priv->sh->cdev->mr_scache, &entry, param->args.addr, cdev->config.mr_ext_memseg_en); if (lkey == UINT32_MAX) diff --git a/drivers/net/mlx5/linux/mlx5_os.c b/drivers/net/mlx5/linux/mlx5_os.c index 9e445f2f9b..61c4870d8c 100644 --- a/drivers/net/mlx5/linux/mlx5_os.c +++ b/drivers/net/mlx5/linux/mlx5_os.c @@ -44,7 +44,6 @@ #include "mlx5_rx.h" #include "mlx5_tx.h" #include "mlx5_autoconf.h" -#include "mlx5_mr.h" #include "mlx5_flow.h" #include "rte_pmd_mlx5.h" #include "mlx5_verbs.h" @@ -623,10 +622,6 @@ mlx5_init_once(void) case RTE_PROC_PRIMARY: if (sd->init_done) break; - LIST_INIT(&sd->mem_event_cb_list); - rte_rwlock_init(&sd->mem_event_rwlock); - rte_mem_event_callback_register("MLX5_MEM_EVENT_CB", - mlx5_mr_mem_event_cb, NULL); ret = mlx5_mp_init_primary(MLX5_MP_NAME, mlx5_mp_os_primary_handle); if (ret) diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c index a6c196b368..91aa5c0c75 100644 --- a/drivers/net/mlx5/mlx5.c +++ b/drivers/net/mlx5/mlx5.c @@ -36,7 +36,6 @@ #include "mlx5_rx.h" #include "mlx5_tx.h" #include "mlx5_autoconf.h" -#include "mlx5_mr.h" #include "mlx5_flow.h" #include "mlx5_flow_os.h" #include "rte_pmd_mlx5.h" @@ -1112,7 +1111,7 @@ mlx5_dev_ctx_shared_mempool_unregister(struct mlx5_dev_ctx_shared *sh, struct mlx5_mp_id mp_id; mlx5_mp_id_init(&mp_id, 0); - if (mlx5_mr_mempool_unregister(&sh->share_cache, mp, &mp_id) < 0) + if (mlx5_mr_mempool_unregister(&sh->cdev->mr_scache, mp, &mp_id) < 0) DRV_LOG(WARNING, "Failed 
to unregister mempool %s for PD %p: %s", mp->name, sh->cdev->pd, rte_strerror(rte_errno)); } @@ -1134,7 +1133,7 @@ mlx5_dev_ctx_shared_mempool_register_cb(struct rte_mempool *mp, void *arg) int ret; mlx5_mp_id_init(&mp_id, 0); - ret = mlx5_mr_mempool_register(&sh->share_cache, sh->cdev->pd, mp, + ret = mlx5_mr_mempool_register(&sh->cdev->mr_scache, sh->cdev->pd, mp, &mp_id); if (ret < 0 && rte_errno != EEXIST) DRV_LOG(ERR, "Failed to register existing mempool %s for PD %p: %s", @@ -1177,8 +1176,8 @@ mlx5_dev_ctx_shared_mempool_event_cb(enum rte_mempool_event event, switch (event) { case RTE_MEMPOOL_EVENT_READY: mlx5_mp_id_init(&mp_id, 0); - if (mlx5_mr_mempool_register(&sh->share_cache, sh->cdev->pd, mp, - &mp_id) < 0) + if (mlx5_mr_mempool_register(&sh->cdev->mr_scache, sh->cdev->pd, + mp, &mp_id) < 0) DRV_LOG(ERR, "Failed to register new mempool %s for PD %p: %s", mp->name, sh->cdev->pd, rte_strerror(rte_errno)); @@ -1342,20 +1341,6 @@ mlx5_alloc_shared_dev_ctx(const struct mlx5_dev_spawn_data *spawn, for (i = 0; i < MLX5_UAR_PAGE_NUM_MAX; i++) rte_spinlock_init(&sh->uar_lock[i]); #endif - /* - * Once the device is added to the list of memory event - * callback, its global MR cache table cannot be expanded - * on the fly because of deadlock. If it overflows, lookup - * should be done by searching MR list linearly, which is slow. - * - * At this point the device is not added to the memory - * event list yet, context is just being created. - */ - err = mlx5_mr_create_cache(&sh->share_cache, sh->numa_node); - if (err) { - err = rte_errno; - goto error; - } mlx5_os_dev_shared_handler_install(sh); sh->cnt_id_tbl = mlx5_l3t_create(MLX5_L3T_TYPE_DWORD); if (!sh->cnt_id_tbl) { @@ -1370,11 +1355,6 @@ mlx5_alloc_shared_dev_ctx(const struct mlx5_dev_spawn_data *spawn, mlx5_flow_aging_init(sh); mlx5_flow_counters_mng_init(sh); mlx5_flow_ipool_create(sh, config); - /* Add device to memory callback list. 
*/ - rte_rwlock_write_lock(&mlx5_shared_data->mem_event_rwlock); - LIST_INSERT_HEAD(&mlx5_shared_data->mem_event_cb_list, - sh, mem_event_cb); - rte_rwlock_write_unlock(&mlx5_shared_data->mem_event_rwlock); /* Add context to the global device list. */ LIST_INSERT_HEAD(&mlx5_dev_ctx_list, sh, next); rte_spinlock_init(&sh->geneve_tlv_opt_sl); @@ -1387,8 +1367,6 @@ mlx5_alloc_shared_dev_ctx(const struct mlx5_dev_spawn_data *spawn, MLX5_ASSERT(sh); if (sh->cnt_id_tbl) mlx5_l3t_destroy(sh->cnt_id_tbl); - if (sh->share_cache.cache.table) - mlx5_mr_btree_free(&sh->share_cache.cache); if (sh->tis) claim_zero(mlx5_devx_cmd_destroy(sh->tis)); if (sh->td) @@ -1444,12 +1422,6 @@ mlx5_free_shared_dev_ctx(struct mlx5_dev_ctx_shared *sh) if (ret == 0) rte_mempool_walk(mlx5_dev_ctx_shared_mempool_unregister_cb, sh); - /* Remove from memory callback device list. */ - rte_rwlock_write_lock(&mlx5_shared_data->mem_event_rwlock); - LIST_REMOVE(sh, mem_event_cb); - rte_rwlock_write_unlock(&mlx5_shared_data->mem_event_rwlock); - /* Release created Memory Regions. */ - mlx5_mr_release_cache(&sh->share_cache); /* Remove context from the global device list. */ LIST_REMOVE(sh, next); /* Release flow workspaces objects on the last device. */ diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h index 5c25b94f36..4f823baa6d 100644 --- a/drivers/net/mlx5/mlx5.h +++ b/drivers/net/mlx5/mlx5.h @@ -1142,9 +1142,6 @@ struct mlx5_dev_ctx_shared { char ibdev_path[MLX5_FS_PATH_MAX]; /* SYSFS dev path for secondary */ struct mlx5_dev_attr device_attr; /* Device properties. */ int numa_node; /* Numa node of backing physical device. */ - LIST_ENTRY(mlx5_dev_ctx_shared) mem_event_cb; - /**< Called by memory event callback. */ - struct mlx5_mr_share_cache share_cache; /* Packet pacing related structure. */ struct mlx5_dev_txpp txpp; /* Shared DV/DR flow data section. 
*/ diff --git a/drivers/net/mlx5/mlx5_flow_aso.c b/drivers/net/mlx5/mlx5_flow_aso.c index 8f3d2ffc2c..1fc1000b01 100644 --- a/drivers/net/mlx5/mlx5_flow_aso.c +++ b/drivers/net/mlx5/mlx5_flow_aso.c @@ -60,17 +60,17 @@ mlx5_aso_cq_create(void *ctx, struct mlx5_aso_cq *cq, uint16_t log_desc_n, /** * Free MR resources. * - * @param[in] sh - * Pointer to shared device context. + * @param[in] cdev + * Pointer to the mlx5 common device. * @param[in] mr * MR to free. */ static void -mlx5_aso_dereg_mr(struct mlx5_dev_ctx_shared *sh, struct mlx5_pmd_mr *mr) +mlx5_aso_dereg_mr(struct mlx5_common_device *cdev, struct mlx5_pmd_mr *mr) { void *addr = mr->addr; - sh->share_cache.dereg_mr_cb(mr); + cdev->mr_scache.dereg_mr_cb(mr); mlx5_free(addr); memset(mr, 0, sizeof(*mr)); } @@ -78,8 +78,8 @@ mlx5_aso_dereg_mr(struct mlx5_dev_ctx_shared *sh, struct mlx5_pmd_mr *mr) /** * Register Memory Region. * - * @param[in] sh - * Pointer to shared device context. + * @param[in] cdev + * Pointer to the mlx5 common device. * @param[in] length * Size of MR buffer. * @param[in/out] mr @@ -91,7 +91,7 @@ mlx5_aso_dereg_mr(struct mlx5_dev_ctx_shared *sh, struct mlx5_pmd_mr *mr) * 0 on success, a negative errno value otherwise and rte_errno is set. 
*/ static int -mlx5_aso_reg_mr(struct mlx5_dev_ctx_shared *sh, size_t length, +mlx5_aso_reg_mr(struct mlx5_common_device *cdev, size_t length, struct mlx5_pmd_mr *mr, int socket) { @@ -103,7 +103,7 @@ mlx5_aso_reg_mr(struct mlx5_dev_ctx_shared *sh, size_t length, DRV_LOG(ERR, "Failed to create ASO bits mem for MR."); return -1; } - ret = sh->share_cache.reg_mr_cb(sh->cdev->pd, mr->addr, length, mr); + ret = cdev->mr_scache.reg_mr_cb(cdev->pd, mr->addr, length, mr); if (ret) { DRV_LOG(ERR, "Failed to create direct Mkey."); mlx5_free(mr->addr); @@ -313,14 +313,14 @@ mlx5_aso_queue_init(struct mlx5_dev_ctx_shared *sh, switch (aso_opc_mod) { case ASO_OPC_MOD_FLOW_HIT: - if (mlx5_aso_reg_mr(sh, (MLX5_ASO_AGE_ACTIONS_PER_POOL / 8) * + if (mlx5_aso_reg_mr(cdev, (MLX5_ASO_AGE_ACTIONS_PER_POOL / 8) * sq_desc_n, &sh->aso_age_mng->aso_sq.mr, 0)) return -1; if (mlx5_aso_sq_create(cdev->ctx, &sh->aso_age_mng->aso_sq, 0, sh->tx_uar, cdev->pdn, MLX5_ASO_QUEUE_LOG_DESC, cdev->config.hca_attr.sq_ts_format)) { - mlx5_aso_dereg_mr(sh, &sh->aso_age_mng->aso_sq.mr); + mlx5_aso_dereg_mr(cdev, &sh->aso_age_mng->aso_sq.mr); return -1; } mlx5_aso_age_init_sq(&sh->aso_age_mng->aso_sq); @@ -335,14 +335,14 @@ mlx5_aso_queue_init(struct mlx5_dev_ctx_shared *sh, break; case ASO_OPC_MOD_CONNECTION_TRACKING: /* 64B per object for query. 
*/ - if (mlx5_aso_reg_mr(sh, 64 * sq_desc_n, + if (mlx5_aso_reg_mr(cdev, 64 * sq_desc_n, &sh->ct_mng->aso_sq.mr, 0)) return -1; if (mlx5_aso_sq_create(cdev->ctx, &sh->ct_mng->aso_sq, 0, sh->tx_uar, cdev->pdn, MLX5_ASO_QUEUE_LOG_DESC, cdev->config.hca_attr.sq_ts_format)) { - mlx5_aso_dereg_mr(sh, &sh->ct_mng->aso_sq.mr); + mlx5_aso_dereg_mr(cdev, &sh->ct_mng->aso_sq.mr); return -1; } mlx5_aso_ct_init_sq(&sh->ct_mng->aso_sq); @@ -370,14 +370,14 @@ mlx5_aso_queue_uninit(struct mlx5_dev_ctx_shared *sh, switch (aso_opc_mod) { case ASO_OPC_MOD_FLOW_HIT: - mlx5_aso_dereg_mr(sh, &sh->aso_age_mng->aso_sq.mr); + mlx5_aso_dereg_mr(sh->cdev, &sh->aso_age_mng->aso_sq.mr); sq = &sh->aso_age_mng->aso_sq; break; case ASO_OPC_MOD_POLICER: sq = &sh->mtrmng->pools_mng.sq; break; case ASO_OPC_MOD_CONNECTION_TRACKING: - mlx5_aso_dereg_mr(sh, &sh->ct_mng->aso_sq.mr); + mlx5_aso_dereg_mr(sh->cdev, &sh->ct_mng->aso_sq.mr); sq = &sh->ct_mng->aso_sq; break; default: diff --git a/drivers/net/mlx5/mlx5_mr.c b/drivers/net/mlx5/mlx5_mr.c index 9ce973d95c..38780202dc 100644 --- a/drivers/net/mlx5/mlx5_mr.c +++ b/drivers/net/mlx5/mlx5_mr.c @@ -12,46 +12,10 @@ #include #include "mlx5.h" -#include "mlx5_mr.h" #include "mlx5_rxtx.h" #include "mlx5_rx.h" #include "mlx5_tx.h" -/** - * Callback for memory event. This can be called from both primary and secondary - * process. - * - * @param event_type - * Memory event type. - * @param addr - * Address of memory. - * @param len - * Size of memory. - */ -void -mlx5_mr_mem_event_cb(enum rte_mem_event event_type, const void *addr, - size_t len, void *arg __rte_unused) -{ - struct mlx5_dev_ctx_shared *sh; - struct mlx5_dev_list *dev_list = &mlx5_shared_data->mem_event_cb_list; - - /* Must be called from the primary process. */ - MLX5_ASSERT(rte_eal_process_type() == RTE_PROC_PRIMARY); - switch (event_type) { - case RTE_MEM_EVENT_FREE: - rte_rwlock_write_lock(&mlx5_shared_data->mem_event_rwlock); - /* Iterate all the existing mlx5 devices. 
-		 */
-		LIST_FOREACH(sh, dev_list, mem_event_cb)
-			mlx5_free_mr_by_addr(&sh->share_cache,
-					     sh->ibdev_name, addr, len);
-		rte_rwlock_write_unlock(&mlx5_shared_data->mem_event_rwlock);
-		break;
-	case RTE_MEM_EVENT_ALLOC:
-	default:
-		break;
-	}
-}
-
 /**
  * Bottom-half of LKey search on Tx.
  *
@@ -72,7 +36,7 @@ mlx5_tx_addr2mr_bh(struct mlx5_txq_data *txq, uintptr_t addr)
 	struct mlx5_priv *priv = txq_ctrl->priv;
 
 	return mlx5_mr_addr2mr_bh(priv->sh->cdev->pd, &priv->mp_id,
-				  &priv->sh->share_cache, mr_ctrl, addr,
+				  &priv->sh->cdev->mr_scache, mr_ctrl, addr,
 				  priv->sh->cdev->config.mr_ext_memseg_en);
 }
 
@@ -110,7 +74,7 @@ mlx5_tx_mb2mr_bh(struct mlx5_txq_data *txq, struct rte_mbuf *mb)
 		mp = buf->mp;
 	}
 	if (mp != NULL) {
-		lkey = mlx5_mr_mempool2mr_bh(&priv->sh->share_cache,
+		lkey = mlx5_mr_mempool2mr_bh(&priv->sh->cdev->mr_scache,
 					     mr_ctrl, mp, addr);
 		/*
 		 * Lookup can only fail on invalid input, e.g. "addr"
@@ -169,7 +133,7 @@ mlx5_net_dma_map(struct rte_device *rte_dev, void *addr,
 	struct rte_eth_dev *dev;
 	struct mlx5_mr *mr;
 	struct mlx5_priv *priv;
-	struct mlx5_dev_ctx_shared *sh;
+	struct mlx5_common_device *cdev;
 
 	dev = dev_to_eth_dev(rte_dev);
 	if (!dev) {
@@ -179,20 +143,20 @@ mlx5_net_dma_map(struct rte_device *rte_dev, void *addr,
 		return -1;
 	}
 	priv = dev->data->dev_private;
-	sh = priv->sh;
-	mr = mlx5_create_mr_ext(sh->cdev->pd, (uintptr_t)addr, len,
-				SOCKET_ID_ANY, sh->share_cache.reg_mr_cb);
+	cdev = priv->sh->cdev;
+	mr = mlx5_create_mr_ext(cdev->pd, (uintptr_t)addr, len,
+				SOCKET_ID_ANY, cdev->mr_scache.reg_mr_cb);
 	if (!mr) {
 		DRV_LOG(WARNING, "port %u unable to dma map",
 			dev->data->port_id);
 		rte_errno = EINVAL;
 		return -1;
 	}
-	rte_rwlock_write_lock(&sh->share_cache.rwlock);
-	LIST_INSERT_HEAD(&sh->share_cache.mr_list, mr, mr);
+	rte_rwlock_write_lock(&cdev->mr_scache.rwlock);
+	LIST_INSERT_HEAD(&cdev->mr_scache.mr_list, mr, mr);
 	/* Insert to the global cache table.
 */
-	mlx5_mr_insert_cache(&sh->share_cache, mr);
-	rte_rwlock_write_unlock(&sh->share_cache.rwlock);
+	mlx5_mr_insert_cache(&cdev->mr_scache, mr);
+	rte_rwlock_write_unlock(&cdev->mr_scache.rwlock);
 	return 0;
 }
 
@@ -217,7 +181,7 @@ mlx5_net_dma_unmap(struct rte_device *rte_dev, void *addr,
 {
 	struct rte_eth_dev *dev;
 	struct mlx5_priv *priv;
-	struct mlx5_dev_ctx_shared *sh;
+	struct mlx5_common_device *cdev;
 	struct mlx5_mr *mr;
 	struct mr_cache_entry entry;
 
@@ -229,11 +193,11 @@ mlx5_net_dma_unmap(struct rte_device *rte_dev, void *addr,
 		return -1;
 	}
 	priv = dev->data->dev_private;
-	sh = priv->sh;
-	rte_rwlock_write_lock(&sh->share_cache.rwlock);
-	mr = mlx5_mr_lookup_list(&sh->share_cache, &entry, (uintptr_t)addr);
+	cdev = priv->sh->cdev;
+	rte_rwlock_write_lock(&cdev->mr_scache.rwlock);
+	mr = mlx5_mr_lookup_list(&cdev->mr_scache, &entry, (uintptr_t)addr);
 	if (!mr) {
-		rte_rwlock_write_unlock(&sh->share_cache.rwlock);
+		rte_rwlock_write_unlock(&cdev->mr_scache.rwlock);
 		DRV_LOG(WARNING,
 			"address 0x%" PRIxPTR " wasn't registered to device %s",
 			(uintptr_t)addr, rte_dev->name);
 		rte_errno = EINVAL;
@@ -242,16 +206,16 @@ mlx5_net_dma_unmap(struct rte_device *rte_dev, void *addr,
 	LIST_REMOVE(mr, mr);
 	DRV_LOG(DEBUG, "port %u remove MR(%p) from list",
 		dev->data->port_id, (void *)mr);
-	mlx5_mr_free(mr, sh->share_cache.dereg_mr_cb);
-	mlx5_mr_rebuild_cache(&sh->share_cache);
+	mlx5_mr_free(mr, cdev->mr_scache.dereg_mr_cb);
+	mlx5_mr_rebuild_cache(&cdev->mr_scache);
 	/*
 	 * No explicit wmb is needed after updating dev_gen due to
 	 * store-release ordering in unlock that provides the
 	 * implicit barrier at the software visible level.
	 */
-	++sh->share_cache.dev_gen;
+	++cdev->mr_scache.dev_gen;
 	DRV_LOG(DEBUG, "broadcasting local cache flush, gen=%d",
-		sh->share_cache.dev_gen);
-	rte_rwlock_write_unlock(&sh->share_cache.rwlock);
+		cdev->mr_scache.dev_gen);
+	rte_rwlock_write_unlock(&cdev->mr_scache.rwlock);
 	return 0;
 }
diff --git a/drivers/net/mlx5/mlx5_mr.h b/drivers/net/mlx5/mlx5_mr.h
deleted file mode 100644
index c984e777b5..0000000000
--- a/drivers/net/mlx5/mlx5_mr.h
+++ /dev/null
@@ -1,26 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright 2018 6WIND S.A.
- * Copyright 2018 Mellanox Technologies, Ltd
- */
-
-#ifndef RTE_PMD_MLX5_MR_H_
-#define RTE_PMD_MLX5_MR_H_
-
-#include
-#include
-#include
-
-#include
-#include
-#include
-#include
-
-#include
-
-/* First entry must be NULL for comparison. */
-#define mlx5_mr_btree_len(bt) ((bt)->len - 1)
-
-void mlx5_mr_mem_event_cb(enum rte_mem_event event_type, const void *addr,
-			  size_t len, void *arg);
-
-#endif /* RTE_PMD_MLX5_MR_H_ */
diff --git a/drivers/net/mlx5/mlx5_rx.c b/drivers/net/mlx5/mlx5_rx.c
index e3b1051ba4..c83c7f4a39 100644
--- a/drivers/net/mlx5/mlx5_rx.c
+++ b/drivers/net/mlx5/mlx5_rx.c
@@ -22,7 +22,6 @@
 #include "mlx5_autoconf.h"
 #include "mlx5_defs.h"
 #include "mlx5.h"
-#include "mlx5_mr.h"
 #include "mlx5_utils.h"
 #include "mlx5_rxtx.h"
 #include "mlx5_rx.h"
diff --git a/drivers/net/mlx5/mlx5_rx.h b/drivers/net/mlx5/mlx5_rx.h
index 1b00076fe7..11e4330935 100644
--- a/drivers/net/mlx5/mlx5_rx.h
+++ b/drivers/net/mlx5/mlx5_rx.h
@@ -18,11 +18,13 @@
 #include "mlx5.h"
 #include "mlx5_autoconf.h"
-#include "mlx5_mr.h"
 
 /* Support tunnel matching. */
 #define MLX5_FLOW_TUNNEL 10
 
+/* First entry must be NULL for comparison. */
+#define mlx5_mr_btree_len(bt) ((bt)->len - 1)
+
 struct mlx5_rxq_stats {
 #ifdef MLX5_PMD_SOFT_COUNTERS
 	uint64_t ipackets; /**< Total of successfully received packets.
 */
@@ -309,7 +311,7 @@ mlx5_rx_addr2mr(struct mlx5_rxq_data *rxq, uintptr_t addr)
	 */
 	rxq_ctrl = container_of(rxq, struct mlx5_rxq_ctrl, rxq);
 	mp = mlx5_rxq_mprq_enabled(rxq) ? rxq->mprq_mp : rxq->mp;
-	return mlx5_mr_mempool2mr_bh(&rxq_ctrl->priv->sh->share_cache,
+	return mlx5_mr_mempool2mr_bh(&rxq_ctrl->priv->sh->cdev->mr_scache,
 				     mr_ctrl, mp, addr);
 }
diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index 53c8c5439d..b866cbfa20 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -1242,7 +1242,7 @@ mlx5_mprq_alloc_mp(struct rte_eth_dev *dev)
 		rte_errno = ENOMEM;
 		return -rte_errno;
 	}
-	ret = mlx5_mr_mempool_register(&priv->sh->share_cache,
+	ret = mlx5_mr_mempool_register(&priv->sh->cdev->mr_scache,
 				       priv->sh->cdev->pd, mp, &priv->mp_id);
 	if (ret < 0 && rte_errno != EEXIST) {
 		ret = rte_errno;
@@ -1450,7 +1450,7 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
 	}
 	tmpl->type = MLX5_RXQ_TYPE_STANDARD;
 	if (mlx5_mr_ctrl_init(&tmpl->rxq.mr_ctrl,
-			      &priv->sh->share_cache.dev_gen, socket)) {
+			      &priv->sh->cdev->mr_scache.dev_gen, socket)) {
 		/* rte_errno is already set.
 */
 		goto error;
 	}
diff --git a/drivers/net/mlx5/mlx5_rxtx.c b/drivers/net/mlx5/mlx5_rxtx.c
index 7b984eff35..ed1f2d2c8c 100644
--- a/drivers/net/mlx5/mlx5_rxtx.c
+++ b/drivers/net/mlx5/mlx5_rxtx.c
@@ -22,7 +22,6 @@
 #include "mlx5_autoconf.h"
 #include "mlx5_defs.h"
 #include "mlx5.h"
-#include "mlx5_mr.h"
 #include "mlx5_utils.h"
 #include "mlx5_rxtx.h"
 #include "mlx5_rx.h"
diff --git a/drivers/net/mlx5/mlx5_rxtx.h b/drivers/net/mlx5/mlx5_rxtx.h
index ad1144e218..b400295e7d 100644
--- a/drivers/net/mlx5/mlx5_rxtx.h
+++ b/drivers/net/mlx5/mlx5_rxtx.h
@@ -24,7 +24,6 @@
 #include "mlx5_utils.h"
 #include "mlx5.h"
 #include "mlx5_autoconf.h"
-#include "mlx5_mr.h"
 
 struct mlx5_priv;
diff --git a/drivers/net/mlx5/mlx5_rxtx_vec.h b/drivers/net/mlx5/mlx5_rxtx_vec.h
index 93b4f517bb..1aec72817e 100644
--- a/drivers/net/mlx5/mlx5_rxtx_vec.h
+++ b/drivers/net/mlx5/mlx5_rxtx_vec.h
@@ -12,7 +12,6 @@
 #include
 
 #include "mlx5_autoconf.h"
-#include "mlx5_mr.h"
 
 /* HW checksum offload capabilities of vectorized Tx.
 */
 #define MLX5_VEC_TX_CKSUM_OFFLOAD_CAP \
diff --git a/drivers/net/mlx5/mlx5_trigger.c b/drivers/net/mlx5/mlx5_trigger.c
index cf4fbd3c9f..54c2893437 100644
--- a/drivers/net/mlx5/mlx5_trigger.c
+++ b/drivers/net/mlx5/mlx5_trigger.c
@@ -14,7 +14,6 @@
 #include
 
 #include "mlx5.h"
-#include "mlx5_mr.h"
 #include "mlx5_rx.h"
 #include "mlx5_tx.h"
 #include "mlx5_utils.h"
@@ -148,7 +147,7 @@ mlx5_rxq_mempool_register(struct mlx5_rxq_ctrl *rxq_ctrl)
 	}
 	for (s = 0; s < rxq_ctrl->rxq.rxseg_n; s++) {
 		mp = rxq_ctrl->rxq.rxseg[s].mp;
-		ret = mlx5_mr_mempool_register(&priv->sh->share_cache,
+		ret = mlx5_mr_mempool_register(&priv->sh->cdev->mr_scache,
 					       priv->sh->cdev->pd, mp,
 					       &priv->mp_id);
 		if (ret < 0 && rte_errno != EEXIST)
diff --git a/drivers/net/mlx5/mlx5_tx.c b/drivers/net/mlx5/mlx5_tx.c
index df671379e4..2cc9ae6772 100644
--- a/drivers/net/mlx5/mlx5_tx.c
+++ b/drivers/net/mlx5/mlx5_tx.c
@@ -22,7 +22,6 @@
 #include "mlx5_autoconf.h"
 #include "mlx5_defs.h"
 #include "mlx5.h"
-#include "mlx5_mr.h"
 #include "mlx5_utils.h"
 #include "mlx5_rxtx.h"
 #include "mlx5_tx.h"
diff --git a/drivers/net/mlx5/mlx5_tx.h b/drivers/net/mlx5/mlx5_tx.h
index cdbcf659df..bab9008d9b 100644
--- a/drivers/net/mlx5/mlx5_tx.h
+++ b/drivers/net/mlx5/mlx5_tx.h
@@ -18,7 +18,6 @@
 #include "mlx5.h"
 #include "mlx5_autoconf.h"
-#include "mlx5_mr.h"
 
 /* TX burst subroutines return codes. */
 enum mlx5_txcmp_code {
diff --git a/drivers/net/mlx5/mlx5_txq.c b/drivers/net/mlx5/mlx5_txq.c
index f12510712a..dee3e4a279 100644
--- a/drivers/net/mlx5/mlx5_txq.c
+++ b/drivers/net/mlx5/mlx5_txq.c
@@ -1118,7 +1118,7 @@ mlx5_txq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
 		return NULL;
 	}
 	if (mlx5_mr_ctrl_init(&tmpl->txq.mr_ctrl,
-			      &priv->sh->share_cache.dev_gen, socket)) {
+			      &priv->sh->cdev->mr_scache.dev_gen, socket)) {
 		/* rte_errno is already set.
 */
 		goto error;
 	}
diff --git a/drivers/net/mlx5/windows/mlx5_os.c b/drivers/net/mlx5/windows/mlx5_os.c
index c3d4b90946..afdfff8b36 100644
--- a/drivers/net/mlx5/windows/mlx5_os.c
+++ b/drivers/net/mlx5/windows/mlx5_os.c
@@ -26,7 +26,6 @@
 #include "mlx5_rx.h"
 #include "mlx5_tx.h"
 #include "mlx5_autoconf.h"
-#include "mlx5_mr.h"
 #include "mlx5_flow.h"
 #include "mlx5_devx.h"
@@ -122,21 +121,8 @@ mlx5_init_shared_data(void)
 static int
 mlx5_init_once(void)
 {
-	struct mlx5_shared_data *sd;
-
 	if (mlx5_init_shared_data())
 		return -rte_errno;
-	sd = mlx5_shared_data;
-	rte_spinlock_lock(&sd->lock);
-	MLX5_ASSERT(sd);
-	if (!sd->init_done) {
-		LIST_INIT(&sd->mem_event_cb_list);
-		rte_rwlock_init(&sd->mem_event_rwlock);
-		rte_mem_event_callback_register("MLX5_MEM_EVENT_CB",
-						mlx5_mr_mem_event_cb, NULL);
-		sd->init_done = true;
-	}
-	rte_spinlock_unlock(&sd->lock);
 	return 0;
 }
diff --git a/drivers/regex/mlx5/mlx5_regex.c b/drivers/regex/mlx5/mlx5_regex.c
index b39181ebb5..7f900b67ee 100644
--- a/drivers/regex/mlx5/mlx5_regex.c
+++ b/drivers/regex/mlx5/mlx5_regex.c
@@ -25,10 +25,6 @@
 int mlx5_regex_logtype;
 
-TAILQ_HEAD(regex_mem_event, mlx5_regex_priv) mlx5_mem_event_list =
-				TAILQ_HEAD_INITIALIZER(mlx5_mem_event_list);
-static pthread_mutex_t mem_event_list_lock = PTHREAD_MUTEX_INITIALIZER;
-
 const struct rte_regexdev_ops mlx5_regexdev_ops = {
 	.dev_info_get = mlx5_regex_info_get,
 	.dev_configure = mlx5_regex_configure,
@@ -86,41 +82,6 @@ mlx5_regex_get_name(char *name, struct rte_device *dev)
 	sprintf(name, "mlx5_regex_%s", dev->name);
 }
 
-/**
- * Callback for memory event.
- *
- * @param event_type
- *   Memory event type.
- * @param addr
- *   Address of memory.
- * @param len
- *   Size of memory.
- */
-static void
-mlx5_regex_mr_mem_event_cb(enum rte_mem_event event_type, const void *addr,
-			   size_t len, void *arg __rte_unused)
-{
-	struct mlx5_regex_priv *priv;
-
-	/* Must be called from the primary process.
-	 */
-	MLX5_ASSERT(rte_eal_process_type() == RTE_PROC_PRIMARY);
-	switch (event_type) {
-	case RTE_MEM_EVENT_FREE:
-		pthread_mutex_lock(&mem_event_list_lock);
-		/* Iterate all the existing mlx5 devices. */
-		TAILQ_FOREACH(priv, &mlx5_mem_event_list, mem_event_cb)
-			mlx5_free_mr_by_addr(&priv->mr_scache,
-					     mlx5_os_get_ctx_device_name
-					     (priv->cdev->ctx),
-					     addr, len);
-		pthread_mutex_unlock(&mem_event_list_lock);
-		break;
-	case RTE_MEM_EVENT_ALLOC:
-	default:
-		break;
-	}
-}
-
 static int
 mlx5_regex_dev_probe(struct mlx5_common_device *cdev)
 {
@@ -194,21 +155,6 @@ mlx5_regex_dev_probe(struct mlx5_common_device *cdev)
 	priv->regexdev->device = cdev->dev;
 	priv->regexdev->data->dev_private = priv;
 	priv->regexdev->state = RTE_REGEXDEV_READY;
-	ret = mlx5_mr_create_cache(&priv->mr_scache, rte_socket_id());
-	if (ret) {
-		DRV_LOG(ERR, "MR init tree failed.");
-		rte_errno = ENOMEM;
-		goto error;
-	}
-	/* Register callback function for global shared MR cache management. */
-	if (TAILQ_EMPTY(&mlx5_mem_event_list))
-		rte_mem_event_callback_register("MLX5_MEM_EVENT_CB",
-						mlx5_regex_mr_mem_event_cb,
-						NULL);
-	/* Add device to memory callback list. */
-	pthread_mutex_lock(&mem_event_list_lock);
-	TAILQ_INSERT_TAIL(&mlx5_mem_event_list, priv, mem_event_cb);
-	pthread_mutex_unlock(&mem_event_list_lock);
 	DRV_LOG(INFO, "RegEx GGA is %s.",
 		priv->has_umr ? "supported" : "unsupported");
 	return 0;
@@ -237,15 +183,6 @@ mlx5_regex_dev_remove(struct mlx5_common_device *cdev)
 		return 0;
 	priv = dev->data->dev_private;
 	if (priv) {
-		/* Remove from memory callback device list.
-		 */
-		pthread_mutex_lock(&mem_event_list_lock);
-		TAILQ_REMOVE(&mlx5_mem_event_list, priv, mem_event_cb);
-		pthread_mutex_unlock(&mem_event_list_lock);
-		if (TAILQ_EMPTY(&mlx5_mem_event_list))
-			rte_mem_event_callback_unregister("MLX5_MEM_EVENT_CB",
-							  NULL);
-		if (priv->mr_scache.cache.table)
-			mlx5_mr_release_cache(&priv->mr_scache);
 		if (priv->uar)
 			mlx5_glue->devx_free_uar(priv->uar);
 		if (priv->regexdev)
diff --git a/drivers/regex/mlx5/mlx5_regex.h b/drivers/regex/mlx5/mlx5_regex.h
index be81931b3a..eb59cc38a6 100644
--- a/drivers/regex/mlx5/mlx5_regex.h
+++ b/drivers/regex/mlx5/mlx5_regex.h
@@ -68,9 +68,6 @@ struct mlx5_regex_priv {
 					MLX5_RXP_EM_COUNT];
 	uint32_t nb_engines; /* Number of RegEx engines. */
 	struct mlx5dv_devx_uar *uar; /* UAR object. */
-	TAILQ_ENTRY(mlx5_regex_priv) mem_event_cb;
-	/**< Called by memory event callback. */
-	struct mlx5_mr_share_cache mr_scache; /* Global shared MR cache. */
 	uint8_t is_bf2; /* The device is BF2 device. */
 	uint8_t has_umr; /* The device supports UMR.
 */
 	uint32_t mmo_regex_qp_cap:1;
diff --git a/drivers/regex/mlx5/mlx5_regex_control.c b/drivers/regex/mlx5/mlx5_regex_control.c
index 6735e51976..50c966a022 100644
--- a/drivers/regex/mlx5/mlx5_regex_control.c
+++ b/drivers/regex/mlx5/mlx5_regex_control.c
@@ -242,7 +242,7 @@ mlx5_regex_qp_setup(struct rte_regexdev *dev, uint16_t qp_ind,
 		nb_sq_config++;
 	}
 
-	ret = mlx5_mr_ctrl_init(&qp->mr_ctrl, &priv->mr_scache.dev_gen,
+	ret = mlx5_mr_ctrl_init(&qp->mr_ctrl, &priv->cdev->mr_scache.dev_gen,
 				rte_socket_id());
 	if (ret) {
 		DRV_LOG(ERR, "Error setting up mr btree");
diff --git a/drivers/regex/mlx5/mlx5_regex_fastpath.c b/drivers/regex/mlx5/mlx5_regex_fastpath.c
index 8817e2e074..adb5343a46 100644
--- a/drivers/regex/mlx5/mlx5_regex_fastpath.c
+++ b/drivers/regex/mlx5/mlx5_regex_fastpath.c
@@ -126,7 +126,7 @@ static inline uint32_t
 mlx5_regex_mb2mr(struct mlx5_regex_priv *priv, struct mlx5_mr_ctrl *mr_ctrl,
 		 struct rte_mbuf *mbuf)
 {
-	return mlx5_mr_mb2mr(priv->cdev, 0, mr_ctrl, mbuf, &priv->mr_scache);
+	return mlx5_mr_mb2mr(priv->cdev, 0, mr_ctrl, mbuf);
 }
 
 static inline void
-- 
2.25.1