From: Shiri Kuzin <shirik@nvidia.com>
To: dev@dpdk.org
Cc: matan@nvidia.com, gakhil@marvell.com, suanmingm@nvidia.com
Date: Thu, 8 Apr 2021 23:48:47 +0300
Message-Id: <20210408204849.9543-23-shirik@nvidia.com>
In-Reply-To: <20210408204849.9543-1-shirik@nvidia.com>
References: <1615447568-260965-1-git-send-email-matan@nvidia.com> <20210408204849.9543-1-shirik@nvidia.com>
Subject: [dpdk-dev] [PATCH 22/24] crypto/mlx5: add memory region management

Mellanox user space drivers don't deal with physical addresses as part of
the memory protection mechanism. The device translates a given virtual
address to a physical address using the given memory key as an address
space identifier. That's why any mbuf virtual address is written directly
to the HW descriptor (WQE). The mapping from virtual address to physical
address is saved in an MR that the kernel configures into the HW. Each MR
has a key that the SW must also write to the WQE. When the SW sees an
unmapped address, it extends the address range and creates an MR using a
system call.

Add memory region cache management:
- a 2-level cache per queue-pair - no locks.
- 1 shared cache between all the queues, protected by a lock.

This way, the MR key search per data-path address is optimized.
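Below is a minimal sketch of the 2-level lookup flow this cache layout
enables. It is illustrative only, not part of the patch: the structs and
helpers (qp_mr_cache, shared_mr_cache, mr_addr_to_lkey(), ...) are
hypothetical stand-ins, not DPDK or driver APIs.

/*
 * Illustrative sketch -- not the driver's code.  Level 1 is a lock-free
 * per-queue cache, level 2 is one cache shared by all queues under a lock,
 * and a miss on both triggers a new MR registration.
 */
#include <pthread.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define CACHE_SLOTS 8
#define LKEY_INVALID UINT32_MAX

struct mr_entry {
	uintptr_t start;	/* first byte covered by the MR */
	uintptr_t end;		/* one past the last covered byte */
	uint32_t lkey;		/* memory key the SW puts in the WQE */
};

/* Level 1: per queue-pair cache, used by a single lcore, so no locking. */
struct qp_mr_cache {
	struct mr_entry slots[CACHE_SLOTS];
	unsigned int n;
};

/* Level 2: one cache shared by all queues, protected by a rwlock. */
struct shared_mr_cache {
	pthread_rwlock_t lock;
	struct mr_entry slots[CACHE_SLOTS];
	unsigned int n;
};

static uint32_t
cache_search(const struct mr_entry *slots, unsigned int n, uintptr_t addr)
{
	for (unsigned int i = 0; i < n; i++)
		if (addr >= slots[i].start && addr < slots[i].end)
			return slots[i].lkey;
	return LKEY_INVALID;
}

/* Stand-in for the real registration path (a system call in the driver). */
static uint32_t
register_new_mr(struct shared_mr_cache *sc, uintptr_t addr)
{
	uint32_t lkey = 0x1234;			/* fake key for the sketch */
	uintptr_t page = addr & ~(uintptr_t)0xfff;

	pthread_rwlock_wrlock(&sc->lock);
	if (sc->n < CACHE_SLOTS) {
		sc->slots[sc->n].start = page;
		sc->slots[sc->n].end = page + 0x1000;
		sc->slots[sc->n].lkey = lkey;
		sc->n++;
	}
	pthread_rwlock_unlock(&sc->lock);
	return lkey;
}

/* Resolve the memory key for one data-path virtual address. */
static uint32_t
mr_addr_to_lkey(struct qp_mr_cache *qc, struct shared_mr_cache *sc,
		uintptr_t addr)
{
	/* 1. Per-queue cache: lock-free fast path, hits almost always. */
	uint32_t lkey = cache_search(qc->slots, qc->n, addr);

	if (lkey != LKEY_INVALID)
		return lkey;
	/* 2. Shared cache: one lock serializes all queues. */
	pthread_rwlock_rdlock(&sc->lock);
	lkey = cache_search(sc->slots, sc->n, addr);
	pthread_rwlock_unlock(&sc->lock);
	if (lkey != LKEY_INVALID)
		return lkey;
	/* 3. Unmapped address: register a new MR covering it.  The real
	 * driver also refills both cache levels with the new entry.
	 */
	return register_new_mr(sc, addr);
}

int
main(void)
{
	struct qp_mr_cache qc;
	struct shared_mr_cache sc;
	uintptr_t buf = (uintptr_t)&qc;	/* any virtual address will do */

	memset(&qc, 0, sizeof(qc));
	memset(&sc, 0, sizeof(sc));
	pthread_rwlock_init(&sc.lock, NULL);
	printf("lkey=0x%x\n", mr_addr_to_lkey(&qc, &sc, buf));
	pthread_rwlock_destroy(&sc.lock);
	return 0;
}

In the actual patch both levels are backed by the common mlx5 MR code: the
per-queue B-tree (mlx5_mr_btree_init/mlx5_mr_btree_free on qp->mr_ctrl) and
the shared cache (priv->mr_scache) that the hunks below set up and release.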
Signed-off-by: Shiri Kuzin <shirik@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
---
 drivers/crypto/mlx5/mlx5_crypto.c | 20 ++++++++++++++++++++
 drivers/crypto/mlx5/mlx5_crypto.h |  3 +++
 2 files changed, 23 insertions(+)

diff --git a/drivers/crypto/mlx5/mlx5_crypto.c b/drivers/crypto/mlx5/mlx5_crypto.c
index cb5716ba2a..f71de5a724 100644
--- a/drivers/crypto/mlx5/mlx5_crypto.c
+++ b/drivers/crypto/mlx5/mlx5_crypto.c
@@ -205,6 +205,7 @@ mlx5_crypto_queue_pair_release(struct rte_cryptodev *dev, uint16_t qp_id)
 	claim_zero(mlx5_glue->devx_umem_dereg(qp->umem_obj));
 	if (qp->umem_buf != NULL)
 		rte_free(qp->umem_buf);
+	mlx5_mr_btree_free(&qp->mr_ctrl.cache_bh);
 	mlx5_devx_cq_destroy(&qp->cq_obj);
 	rte_free(qp);
 	dev->data->queue_pairs[qp_id] = NULL;
@@ -284,6 +285,13 @@ mlx5_crypto_queue_pair_setup(struct rte_cryptodev *dev, uint16_t qp_id,
 		DRV_LOG(ERR, "Failed to register QP umem.");
 		goto error;
 	}
+	if (mlx5_mr_btree_init(&qp->mr_ctrl.cache_bh, MLX5_MR_BTREE_CACHE_N,
+			       priv->dev_config.socket_id) != 0) {
+		DRV_LOG(ERR, "Cannot allocate MR Btree for qp %u.",
+			(uint32_t)qp_id);
+		rte_errno = ENOMEM;
+		goto error;
+	}
 	attr.pd = priv->pdn;
 	attr.uar_index = mlx5_os_get_devx_uar_page_id(priv->uar);
 	attr.cqn = qp->cq_obj.cq->id;
@@ -472,6 +480,17 @@ mlx5_crypto_pci_probe(struct rte_pci_driver *pci_drv,
 		claim_zero(mlx5_glue->close_device(priv->ctx));
 		return -1;
 	}
+	if (mlx5_mr_btree_init(&priv->mr_scache.cache,
+			       MLX5_MR_BTREE_CACHE_N * 2, rte_socket_id()) != 0) {
+		DRV_LOG(ERR, "Failed to allocate shared cache MR memory.");
+		mlx5_crypto_hw_global_release(priv);
+		rte_cryptodev_pmd_destroy(priv->crypto_dev);
+		claim_zero(mlx5_glue->close_device(priv->ctx));
+		rte_errno = ENOMEM;
+		return -rte_errno;
+	}
+	priv->mr_scache.reg_mr_cb = mlx5_common_verbs_reg_mr;
+	priv->mr_scache.dereg_mr_cb = mlx5_common_verbs_dereg_mr;
 	pthread_mutex_lock(&priv_list_lock);
 	TAILQ_INSERT_TAIL(&mlx5_crypto_priv_list, priv, next);
 	pthread_mutex_unlock(&priv_list_lock);
@@ -491,6 +510,7 @@ mlx5_crypto_pci_remove(struct rte_pci_device *pdev)
 		TAILQ_REMOVE(&mlx5_crypto_priv_list, priv, next);
 	pthread_mutex_unlock(&priv_list_lock);
 	if (priv) {
+		mlx5_mr_release_cache(&priv->mr_scache);
 		mlx5_crypto_hw_global_release(priv);
 		rte_cryptodev_pmd_destroy(priv->crypto_dev);
 		claim_zero(mlx5_glue->close_device(priv->ctx));
diff --git a/drivers/crypto/mlx5/mlx5_crypto.h b/drivers/crypto/mlx5/mlx5_crypto.h
index f5313b89f2..397267d249 100644
--- a/drivers/crypto/mlx5/mlx5_crypto.h
+++ b/drivers/crypto/mlx5/mlx5_crypto.h
@@ -12,6 +12,7 @@
 #include
 #include
+#include

 #define MLX5_CRYPTO_DEK_HTABLE_SZ (1 << 11)
 #define MLX5_CRYPTO_KEY_LENGTH 80
@@ -27,6 +28,7 @@ struct mlx5_crypto_priv {
 	struct ibv_pd *pd;
 	struct mlx5_hlist *dek_hlist; /* Dek hash list. */
 	struct rte_cryptodev_config dev_config;
+	struct mlx5_mr_share_cache mr_scache; /* Global shared MR cache. */
 };

 struct mlx5_crypto_qp {
@@ -36,6 +38,7 @@ struct mlx5_crypto_qp {
 	void *umem_buf;
 	volatile uint32_t *db_rec;
 	struct rte_crypto_op **ops;
+	struct mlx5_mr_ctrl mr_ctrl;
 };

 struct mlx5_crypto_dek {
-- 
2.21.0
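As a closing note on the commit message's point that both the mbuf virtual
address and the MR key are written to the WQE, here is a hedged sketch of a
data-segment fill. struct wqe_data_seg and wqe_set_data_seg() are made-up
illustrative names (the real layout comes from the mlx5 device headers);
only the rte_* calls are standard DPDK APIs.

#include <stdint.h>
#include <rte_byteorder.h>
#include <rte_mbuf.h>

/* Hypothetical descriptor layout: length, memory key, virtual address. */
struct wqe_data_seg {
	uint32_t byte_count;	/* data length, big-endian */
	uint32_t lkey;		/* key of the MR covering the buffer */
	uint64_t addr;		/* untranslated virtual address, big-endian */
};

/*
 * The SW posts the mbuf virtual address as-is together with the lkey it
 * resolved from the MR cache; the device uses the lkey to translate the
 * address and to enforce memory protection.
 */
static inline void
wqe_set_data_seg(struct wqe_data_seg *seg, struct rte_mbuf *mbuf,
		 uint32_t lkey)
{
	seg->byte_count = rte_cpu_to_be_32(rte_pktmbuf_data_len(mbuf));
	seg->lkey = rte_cpu_to_be_32(lkey);
	seg->addr =
		rte_cpu_to_be_64((uintptr_t)rte_pktmbuf_mtod(mbuf, void *));
}

The lkey passed in would come from a per-queue lookup like the one sketched
earlier, i.e. from qp->mr_ctrl with a fallback to priv->mr_scache.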