From mboxrd@z Thu Jan 1 00:00:00 1970
From: Michael Baum <michaelba@nvidia.com>
Cc: Matan Azrad, Thomas Monjalon, Michael Baum, Viacheslav Ovsiienko, Dmitry Kozlyuk
Date: Wed, 3 Nov 2021 12:17:06 +0200
Message-ID: <20211103101707.1418097-3-michaelba@nvidia.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20211103101707.1418097-1-michaelba@nvidia.com>
References: <20211103101707.1418097-1-michaelba@nvidia.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
Subject: [dpdk-dev] [PATCH 2/3] common/mlx5: fix redundant parameter in search MR function
X-BeenThere: dev@dpdk.org
List-Id: DPDK patches and discussions
Errors-To: dev-bounces@dpdk.org
Sender: "dev"

From: Michael Baum

Memory region (MR) management has recently been shared between drivers, including the cache search in the data plane. The initial search, in the queue's local linear cache, usually yields a result, so there is no need to continue into the next-level caches. Nevertheless, the function that searches the local cache takes a pointer to the device as a parameter. That pointer is not needed for the search itself, only for the subsequent searches which, as mentioned, usually do not happen. Passing the device pointer down to this function and maintaining it takes some time and causes some performance impact.
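To make the shape of the change concrete, here is a minimal, self-contained C sketch of the pattern this patch applies. The struct and function names here are hypothetical stand-ins, not the actual mlx5 definitions: the control path stores the device pointer once in the per-queue control structure, so the data-path slow path can recover it locally instead of receiving it as a parameter on every call.

```c
#include <stdint.h>

/*
 * Simplified, hypothetical mirror of the structures involved --
 * not the actual mlx5 definitions.
 */
struct fake_device {
	uint32_t dev_gen; /* Global cache generation number. */
};

struct fake_mr_ctrl {
	struct fake_device *cdev; /* Saved once on the control path. */
	uint32_t *dev_gen_ptr;    /* Still kept to poll for memory events. */
};

/* Control path: runs once per queue, so the extra store is free per packet. */
static void
fake_mr_ctrl_init(struct fake_mr_ctrl *ctrl, struct fake_device *cdev)
{
	ctrl->cdev = cdev;
	ctrl->dev_gen_ptr = &cdev->dev_gen;
}

/*
 * Data-path slow path: recovers the device from the control structure
 * instead of taking it as an extra parameter.
 */
static uint32_t
fake_mb2mr_bh(struct fake_mr_ctrl *ctrl)
{
	struct fake_device *cdev = ctrl->cdev;

	return cdev->dev_gen; /* Stand-in for the real bottom-half lookup. */
}
```

The trade-off is one extra pointer per queue in exchange for a shorter argument list on every datapath call that usually never needs the device at all.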
Add the pointer to the device as a field of the mr_ctrl structure. The field is updated during the control path and is used only when needed in the search.

Fixes: fc59a1ec556b ("common/mlx5: share MR mempool registration")

Signed-off-by: Michael Baum
Reviewed-by: Viacheslav Ovsiienko
Reviewed-by: Dmitry Kozlyuk
Acked-by: Matan Azrad
---
 drivers/common/mlx5/mlx5_common_mr.c     | 14 +++++++-----
 drivers/common/mlx5/mlx5_common_mr.h     | 28 ++++++++++-------------
 drivers/compress/mlx5/mlx5_compress.c    |  4 ++--
 drivers/crypto/mlx5/mlx5_crypto.c        | 24 +++++++++-----------
 drivers/net/mlx5/mlx5_rx.h               | 10 ++------
 drivers/net/mlx5/mlx5_rxq.c              |  3 +--
 drivers/net/mlx5/mlx5_tx.h               |  3 +--
 drivers/net/mlx5/mlx5_txq.c              |  3 +--
 drivers/regex/mlx5/mlx5_regex_control.c  |  3 +--
 drivers/regex/mlx5/mlx5_regex_fastpath.c | 29 ++++++++--------------------
 10 files changed, 43 insertions(+), 78 deletions(-)

diff --git a/drivers/common/mlx5/mlx5_common_mr.c b/drivers/common/mlx5/mlx5_common_mr.c
index 903ed0652c..003d358f96 100644
--- a/drivers/common/mlx5/mlx5_common_mr.c
+++ b/drivers/common/mlx5/mlx5_common_mr.c
@@ -292,8 +292,8 @@ mlx5_mr_btree_dump(struct mlx5_mr_btree *bt __rte_unused)
  *
  * @param mr_ctrl
  *   Pointer to MR control structure.
- * @param dev_gen_ptr
- *   Pointer to generation number of global cache.
+ * @param cdev
+ *   Pointer to the mlx5 device structure.
  * @param socket
  *   NUMA socket on which memory must be allocated.
  *
@@ -301,15 +301,16 @@ mlx5_mr_btree_dump(struct mlx5_mr_btree *bt __rte_unused)
  *   0 on success, a negative errno value otherwise and rte_errno is set.
  */
 int
-mlx5_mr_ctrl_init(struct mlx5_mr_ctrl *mr_ctrl, uint32_t *dev_gen_ptr,
+mlx5_mr_ctrl_init(struct mlx5_mr_ctrl *mr_ctrl, struct mlx5_common_device *cdev,
 		  int socket)
 {
 	if (mr_ctrl == NULL) {
 		rte_errno = EINVAL;
 		return -rte_errno;
 	}
+	mr_ctrl->cdev = cdev;
 	/* Save pointer of global generation number to check memory event. */
-	mr_ctrl->dev_gen_ptr = dev_gen_ptr;
+	mr_ctrl->dev_gen_ptr = &cdev->mr_scache.dev_gen;
 	/* Initialize B-tree and allocate memory for bottom-half cache table. */
 	return mlx5_mr_btree_init(&mr_ctrl->cache_bh, MLX5_MR_BTREE_CACHE_N,
 				  socket);
@@ -1860,11 +1861,12 @@ mlx5_mr_mempool2mr_bh(struct mlx5_mr_share_cache *share_cache,
 }
 
 uint32_t
-mlx5_mr_mb2mr_bh(struct mlx5_common_device *cdev, struct mlx5_mp_id *mp_id,
-		 struct mlx5_mr_ctrl *mr_ctrl, struct rte_mbuf *mb)
+mlx5_mr_mb2mr_bh(struct mlx5_mr_ctrl *mr_ctrl, struct rte_mbuf *mb,
+		 struct mlx5_mp_id *mp_id)
 {
 	uint32_t lkey;
 	uintptr_t addr = (uintptr_t)mb->buf_addr;
+	struct mlx5_common_device *cdev = mr_ctrl->cdev;
 
 	if (cdev->config.mr_mempool_reg_en) {
 		struct rte_mempool *mp = NULL;
diff --git a/drivers/common/mlx5/mlx5_common_mr.h b/drivers/common/mlx5/mlx5_common_mr.h
index 8771c7d02b..f65974b8a9 100644
--- a/drivers/common/mlx5/mlx5_common_mr.h
+++ b/drivers/common/mlx5/mlx5_common_mr.h
@@ -66,6 +66,7 @@ struct mlx5_common_device;
 
 /* Per-queue MR control descriptor. */
 struct mlx5_mr_ctrl {
+	struct mlx5_common_device *cdev; /* Pointer to the mlx5 common device.*/
 	uint32_t *dev_gen_ptr; /* Generation number of device to poll. */
 	uint32_t cur_gen; /* Generation number saved to flush caches. */
 	uint16_t mru; /* Index of last hit entry in top-half cache. */
@@ -169,41 +170,36 @@ void mlx5_mr_flush_local_cache(struct mlx5_mr_ctrl *mr_ctrl);
  * Bottom-half of LKey search on. If supported, lookup for the address from
  * the mempool. Otherwise, search in old mechanism caches.
  *
- * @param cdev
- *   Pointer to mlx5 device.
- * @param mp_id
- *   Multi-process identifier, may be NULL for the primary process.
  * @param mr_ctrl
  *   Pointer to per-queue MR control structure.
  * @param mb
  *   Pointer to mbuf.
+ * @param mp_id
+ *   Multi-process identifier, may be NULL for the primary process.
  *
  * @return
  *   Searched LKey on success, UINT32_MAX on no match.
  */
 __rte_internal
-uint32_t mlx5_mr_mb2mr_bh(struct mlx5_common_device *cdev,
-			  struct mlx5_mp_id *mp_id,
-			  struct mlx5_mr_ctrl *mr_ctrl, struct rte_mbuf *mb);
+uint32_t mlx5_mr_mb2mr_bh(struct mlx5_mr_ctrl *mr_ctrl, struct rte_mbuf *mbuf,
+			  struct mlx5_mp_id *mp_id);
 
 /**
  * Query LKey from a packet buffer.
  *
- * @param cdev
- *   Pointer to the mlx5 device structure.
- * @param mp_id
- *   Multi-process identifier, may be NULL for the primary process.
 * @param mr_ctrl
 *   Pointer to per-queue MR control structure.
 * @param mbuf
 *   Pointer to mbuf.
+ * @param mp_id
+ *   Multi-process identifier, may be NULL for the primary process.
 *
 * @return
 *   Searched LKey on success, UINT32_MAX on no match.
 */
 static __rte_always_inline uint32_t
-mlx5_mr_mb2mr(struct mlx5_common_device *cdev, struct mlx5_mp_id *mp_id,
-	      struct mlx5_mr_ctrl *mr_ctrl, struct rte_mbuf *mbuf)
+mlx5_mr_mb2mr(struct mlx5_mr_ctrl *mr_ctrl, struct rte_mbuf *mbuf,
+	      struct mlx5_mp_id *mp_id)
 {
 	uint32_t lkey;
 
@@ -216,14 +212,14 @@ mlx5_mr_mb2mr(struct mlx5_common_device *cdev, struct mlx5_mp_id *mp_id,
 	if (likely(lkey != UINT32_MAX))
 		return lkey;
 	/* Take slower bottom-half on miss. */
-	return mlx5_mr_mb2mr_bh(cdev, mp_id, mr_ctrl, mbuf);
+	return mlx5_mr_mb2mr_bh(mr_ctrl, mbuf, mp_id);
 }
 
 /* mlx5_common_mr.c */
 
 __rte_internal
-int mlx5_mr_ctrl_init(struct mlx5_mr_ctrl *mr_ctrl, uint32_t *dev_gen_ptr,
-		      int socket);
+int mlx5_mr_ctrl_init(struct mlx5_mr_ctrl *mr_ctrl,
+		      struct mlx5_common_device *cdev, int socket);
 __rte_internal
 void mlx5_mr_btree_free(struct mlx5_mr_btree *bt);
 void mlx5_mr_btree_dump(struct mlx5_mr_btree *bt __rte_unused);
diff --git a/drivers/compress/mlx5/mlx5_compress.c b/drivers/compress/mlx5/mlx5_compress.c
index c4081c5f7d..5cf6d647af 100644
--- a/drivers/compress/mlx5/mlx5_compress.c
+++ b/drivers/compress/mlx5/mlx5_compress.c
@@ -205,7 +205,7 @@ mlx5_compress_qp_setup(struct rte_compressdev *dev, uint16_t qp_id,
 		return -rte_errno;
 	}
 	dev->data->queue_pairs[qp_id] = qp;
-	if (mlx5_mr_ctrl_init(&qp->mr_ctrl, &priv->cdev->mr_scache.dev_gen,
+	if (mlx5_mr_ctrl_init(&qp->mr_ctrl, priv->cdev,
 			      priv->dev_config.socket_id)) {
 		DRV_LOG(ERR, "Cannot allocate MR Btree for qp %u.",
 			(uint32_t)qp_id);
@@ -471,7 +471,7 @@ mlx5_compress_dseg_set(struct mlx5_compress_qp *qp,
 	uintptr_t addr = rte_pktmbuf_mtod_offset(mbuf, uintptr_t, offset);
 
 	dseg->bcount = rte_cpu_to_be_32(len);
-	dseg->lkey = mlx5_mr_mb2mr(qp->priv->cdev, 0, &qp->mr_ctrl, mbuf);
+	dseg->lkey = mlx5_mr_mb2mr(&qp->mr_ctrl, mbuf, 0);
 	dseg->pbuf = rte_cpu_to_be_64(addr);
 	return dseg->lkey;
 }
diff --git a/drivers/crypto/mlx5/mlx5_crypto.c b/drivers/crypto/mlx5/mlx5_crypto.c
index f430d8cde0..1740dba003 100644
--- a/drivers/crypto/mlx5/mlx5_crypto.c
+++ b/drivers/crypto/mlx5/mlx5_crypto.c
@@ -305,9 +305,9 @@ mlx5_crypto_get_block_size(struct rte_crypto_op *op)
 }
 
 static __rte_always_inline uint32_t
-mlx5_crypto_klm_set(struct mlx5_crypto_priv *priv, struct mlx5_crypto_qp *qp,
-		    struct rte_mbuf *mbuf, struct mlx5_wqe_dseg *klm,
-		    uint32_t offset, uint32_t *remain)
+mlx5_crypto_klm_set(struct mlx5_crypto_qp *qp, struct rte_mbuf *mbuf,
+		    struct mlx5_wqe_dseg *klm, uint32_t offset,
+		    uint32_t *remain)
 {
 	uint32_t data_len = (rte_pktmbuf_data_len(mbuf) - offset);
 	uintptr_t addr = rte_pktmbuf_mtod_offset(mbuf, uintptr_t, offset);
@@ -317,22 +317,21 @@ mlx5_crypto_klm_set(struct mlx5_crypto_priv *priv, struct mlx5_crypto_qp *qp,
 	*remain -= data_len;
 	klm->bcount = rte_cpu_to_be_32(data_len);
 	klm->pbuf = rte_cpu_to_be_64(addr);
-	klm->lkey = mlx5_mr_mb2mr(priv->cdev, 0, &qp->mr_ctrl, mbuf);
+	klm->lkey = mlx5_mr_mb2mr(&qp->mr_ctrl, mbuf, 0);
 	return klm->lkey;
 }
 
 static __rte_always_inline uint32_t
-mlx5_crypto_klms_set(struct mlx5_crypto_priv *priv, struct mlx5_crypto_qp *qp,
-		     struct rte_crypto_op *op, struct rte_mbuf *mbuf,
-		     struct mlx5_wqe_dseg *klm)
+mlx5_crypto_klms_set(struct mlx5_crypto_qp *qp, struct rte_crypto_op *op,
+		     struct rte_mbuf *mbuf, struct mlx5_wqe_dseg *klm)
 {
 	uint32_t remain_len = op->sym->cipher.data.length;
 	uint32_t nb_segs = mbuf->nb_segs;
 	uint32_t klm_n = 1u;
 
 	/* First mbuf needs to take the cipher offset. */
-	if (unlikely(mlx5_crypto_klm_set(priv, qp, mbuf, klm,
+	if (unlikely(mlx5_crypto_klm_set(qp, mbuf, klm,
 			op->sym->cipher.data.offset, &remain_len) ==
 								UINT32_MAX)) {
 		op->status = RTE_CRYPTO_OP_STATUS_ERROR;
 		return 0;
@@ -344,7 +343,7 @@ mlx5_crypto_klms_set(struct mlx5_crypto_priv *priv, struct mlx5_crypto_qp *qp,
 			op->status = RTE_CRYPTO_OP_STATUS_INVALID_ARGS;
 			return 0;
 		}
-		if (unlikely(mlx5_crypto_klm_set(priv, qp, mbuf, ++klm, 0,
+		if (unlikely(mlx5_crypto_klm_set(qp, mbuf, ++klm, 0,
 				&remain_len) == UINT32_MAX)) {
 			op->status = RTE_CRYPTO_OP_STATUS_ERROR;
 			return 0;
@@ -370,7 +369,7 @@ mlx5_crypto_wqe_set(struct mlx5_crypto_priv *priv,
 	uint32_t ds;
 	bool ipl = op->sym->m_dst == NULL || op->sym->m_dst == op->sym->m_src;
 	/* Set UMR WQE. */
-	uint32_t klm_n = mlx5_crypto_klms_set(priv, qp, op,
+	uint32_t klm_n = mlx5_crypto_klms_set(qp, op,
 			ipl ? op->sym->m_src : op->sym->m_dst, klms);
 
 	if (unlikely(klm_n == 0))
@@ -396,8 +395,7 @@ mlx5_crypto_wqe_set(struct mlx5_crypto_priv *priv,
 	cseg = RTE_PTR_ADD(cseg, priv->umr_wqe_size);
 	klms = RTE_PTR_ADD(cseg, sizeof(struct mlx5_rdma_write_wqe));
 	if (!ipl) {
-		klm_n = mlx5_crypto_klms_set(priv, qp, op, op->sym->m_src,
-					     klms);
+		klm_n = mlx5_crypto_klms_set(qp, op, op->sym->m_src, klms);
 		if (unlikely(klm_n == 0))
 			return 0;
 	} else {
@@ -643,7 +641,7 @@ mlx5_crypto_queue_pair_setup(struct rte_cryptodev *dev, uint16_t qp_id,
 		DRV_LOG(ERR, "Failed to create QP.");
 		goto error;
 	}
-	if (mlx5_mr_ctrl_init(&qp->mr_ctrl, &priv->cdev->mr_scache.dev_gen,
+	if (mlx5_mr_ctrl_init(&qp->mr_ctrl, priv->cdev,
 			      priv->dev_config.socket_id) != 0) {
 		DRV_LOG(ERR, "Cannot allocate MR Btree for qp %u.",
 			(uint32_t)qp_id);
diff --git a/drivers/net/mlx5/mlx5_rx.h b/drivers/net/mlx5/mlx5_rx.h
index 4952fe1455..322f234628 100644
--- a/drivers/net/mlx5/mlx5_rx.h
+++ b/drivers/net/mlx5/mlx5_rx.h
@@ -282,7 +282,6 @@ static __rte_always_inline uint32_t
 mlx5_rx_addr2mr(struct mlx5_rxq_data *rxq, uintptr_t addr)
 {
 	struct mlx5_mr_ctrl *mr_ctrl = &rxq->mr_ctrl;
-	struct mlx5_rxq_ctrl *rxq_ctrl;
 	struct rte_mempool *mp;
 	uint32_t lkey;
 
@@ -291,14 +290,9 @@ mlx5_rx_addr2mr(struct mlx5_rxq_data *rxq, uintptr_t addr)
 			   MLX5_MR_CACHE_N, addr);
 	if (likely(lkey != UINT32_MAX))
 		return lkey;
-	/*
-	 * Slower search in the mempool database on miss.
-	 * During queue creation rxq->sh is not yet set, so we use rxq_ctrl.
-	 */
-	rxq_ctrl = container_of(rxq, struct mlx5_rxq_ctrl, rxq);
 	mp = mlx5_rxq_mprq_enabled(rxq) ? rxq->mprq_mp : rxq->mp;
-	return mlx5_mr_mempool2mr_bh(&rxq_ctrl->priv->sh->cdev->mr_scache,
-				     mr_ctrl, mp, addr);
+	return mlx5_mr_mempool2mr_bh(&mr_ctrl->cdev->mr_scache, mr_ctrl,
+				     mp, addr);
 }
 
 #define mlx5_rx_mb2mr(rxq, mb) mlx5_rx_addr2mr(rxq, (uintptr_t)((mb)->buf_addr))
diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index 4f02fe02b9..1fc2f0e0c1 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -1455,8 +1455,7 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
 		goto error;
 	}
 	tmpl->type = MLX5_RXQ_TYPE_STANDARD;
-	if (mlx5_mr_ctrl_init(&tmpl->rxq.mr_ctrl,
-			      &priv->sh->cdev->mr_scache.dev_gen, socket)) {
+	if (mlx5_mr_ctrl_init(&tmpl->rxq.mr_ctrl, priv->sh->cdev, socket)) {
 		/* rte_errno is already set. */
 		goto error;
 	}
diff --git a/drivers/net/mlx5/mlx5_tx.h b/drivers/net/mlx5/mlx5_tx.h
index ea20213a40..7fed0e7cb9 100644
--- a/drivers/net/mlx5/mlx5_tx.h
+++ b/drivers/net/mlx5/mlx5_tx.h
@@ -368,10 +368,9 @@ mlx5_tx_mb2mr(struct mlx5_txq_data *txq, struct rte_mbuf *mb)
 	struct mlx5_mr_ctrl *mr_ctrl = &txq->mr_ctrl;
 	struct mlx5_txq_ctrl *txq_ctrl = container_of(txq, struct mlx5_txq_ctrl,
 						      txq);
-	struct mlx5_priv *priv = txq_ctrl->priv;
 
 	/* Take slower bottom-half on miss. */
-	return mlx5_mr_mb2mr(priv->sh->cdev, &priv->mp_id, mr_ctrl, mb);
+	return mlx5_mr_mb2mr(mr_ctrl, mb, &txq_ctrl->priv->mp_id);
 }
 
 /**
diff --git a/drivers/net/mlx5/mlx5_txq.c b/drivers/net/mlx5/mlx5_txq.c
index e2a38d980a..e9ab7fa266 100644
--- a/drivers/net/mlx5/mlx5_txq.c
+++ b/drivers/net/mlx5/mlx5_txq.c
@@ -1134,8 +1134,7 @@ mlx5_txq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
 		rte_errno = ENOMEM;
 		return NULL;
 	}
-	if (mlx5_mr_ctrl_init(&tmpl->txq.mr_ctrl,
-			      &priv->sh->cdev->mr_scache.dev_gen, socket)) {
+	if (mlx5_mr_ctrl_init(&tmpl->txq.mr_ctrl, priv->sh->cdev, socket)) {
 		/* rte_errno is already set. */
 		goto error;
 	}
diff --git a/drivers/regex/mlx5/mlx5_regex_control.c b/drivers/regex/mlx5/mlx5_regex_control.c
index 50c966a022..e40b1f20d9 100644
--- a/drivers/regex/mlx5/mlx5_regex_control.c
+++ b/drivers/regex/mlx5/mlx5_regex_control.c
@@ -242,8 +242,7 @@ mlx5_regex_qp_setup(struct rte_regexdev *dev, uint16_t qp_ind,
 		nb_sq_config++;
 	}
 
-	ret = mlx5_mr_ctrl_init(&qp->mr_ctrl, &priv->cdev->mr_scache.dev_gen,
-				rte_socket_id());
+	ret = mlx5_mr_ctrl_init(&qp->mr_ctrl, priv->cdev, rte_socket_id());
 	if (ret) {
 		DRV_LOG(ERR, "Error setting up mr btree");
 		goto err_btree;
diff --git a/drivers/regex/mlx5/mlx5_regex_fastpath.c b/drivers/regex/mlx5/mlx5_regex_fastpath.c
index adb5343a46..943cb9c19e 100644
--- a/drivers/regex/mlx5/mlx5_regex_fastpath.c
+++ b/drivers/regex/mlx5/mlx5_regex_fastpath.c
@@ -109,26 +109,6 @@ set_wqe_ctrl_seg(struct mlx5_wqe_ctrl_seg *seg, uint16_t pi, uint8_t opcode,
 	seg->imm = imm;
 }
 
-/**
- * Query LKey from a packet buffer for QP. If not found, add the mempool.
- *
- * @param priv
- *   Pointer to the priv object.
- * @param mr_ctrl
- *   Pointer to per-queue MR control structure.
- * @param mbuf
- *   Pointer to source mbuf, to search in.
- *
- * @return
- *   Searched LKey on success, UINT32_MAX on no match.
- */
-static inline uint32_t
-mlx5_regex_mb2mr(struct mlx5_regex_priv *priv, struct mlx5_mr_ctrl *mr_ctrl,
-		 struct rte_mbuf *mbuf)
-{
-	return mlx5_mr_mb2mr(priv->cdev, 0, mr_ctrl, mbuf);
-}
-
 static inline void
 __prep_one(struct mlx5_regex_priv *priv, struct mlx5_regex_hw_qp *qp_obj,
 	   struct rte_regex_ops *op, struct mlx5_regex_job *job,
@@ -180,7 +160,7 @@ prep_one(struct mlx5_regex_priv *priv, struct mlx5_regex_qp *qp,
 	struct mlx5_klm klm;
 
 	klm.byte_count = rte_pktmbuf_data_len(op->mbuf);
-	klm.mkey = mlx5_regex_mb2mr(priv, &qp->mr_ctrl, op->mbuf);
+	klm.mkey = mlx5_mr_mb2mr(&qp->mr_ctrl, op->mbuf, 0);
 	klm.address = rte_pktmbuf_mtod(op->mbuf, uintptr_t);
 	__prep_one(priv, qp_obj, op, job, qp_obj->pi, &klm);
 	qp_obj->db_pi = qp_obj->pi;
@@ -349,9 +329,8 @@ prep_regex_umr_wqe_set(struct mlx5_regex_priv *priv, struct mlx5_regex_qp *qp,
 			while (mbuf) {
 				addr = rte_pktmbuf_mtod(mbuf, uintptr_t);
 				/* Build indirect mkey seg's KLM. */
-				mkey_klm->mkey = mlx5_regex_mb2mr(priv,
-								  &qp->mr_ctrl,
-								  mbuf);
+				mkey_klm->mkey = mlx5_mr_mb2mr(&qp->mr_ctrl,
+							       mbuf, 0);
 				mkey_klm->address = rte_cpu_to_be_64(addr);
 				mkey_klm->byte_count = rte_cpu_to_be_32
 						(rte_pktmbuf_data_len(mbuf));
@@ -368,7 +347,7 @@ prep_regex_umr_wqe_set(struct mlx5_regex_priv *priv, struct mlx5_regex_qp *qp,
 			klm.byte_count = scatter_size;
 		} else {
 			/* The single mubf case. Build the KLM directly. */
-			klm.mkey = mlx5_regex_mb2mr(priv, &qp->mr_ctrl, mbuf);
+			klm.mkey = mlx5_mr_mb2mr(&qp->mr_ctrl, mbuf, 0);
 			klm.address = rte_pktmbuf_mtod(mbuf, uintptr_t);
 			klm.byte_count = rte_pktmbuf_data_len(mbuf);
 		}
-- 
2.25.1