From mboxrd@z Thu Jan 1 00:00:00 1970
From: Suanming Mou
To: 
CC: , , , "Michael Baum"
Date: Tue, 20 Jul 2021 16:09:35 +0300
Message-ID: <20210720130944.5407-7-suanmingm@nvidia.com>
X-Mailer: git-send-email 2.18.1
In-Reply-To: <20210720130944.5407-1-suanmingm@nvidia.com>
References: <20210408204849.9543-1-shirik@nvidia.com>
 <20210720130944.5407-1-suanmingm@nvidia.com>
MIME-Version: 1.0
Content-Type: text/plain
Subject: [dpdk-dev] [PATCH v9 06/15] crypto/mlx5: add memory region management
List-Id: DPDK patches and discussions

From: Shiri Kuzin

Mellanox user space drivers don't deal with physical addresses as part
of a memory protection mechanism. The device translates a given virtual
address to a physical address using the given memory key as an address
space identifier. That is why any mbuf virtual address is moved directly
to the HW descriptor (WQE).

The mapping between the virtual address and the physical address is saved
in an MR configured by the kernel to the HW. Each MR has a key that should
also be moved to the WQE by the SW.

When the SW sees an unmapped address, it extends the address range and
creates an MR using a system call.

Add memory region cache management:
  - 2-level cache per queue-pair - no locks.
  - 1 shared cache between all the queues, using a lock.

This way, the MR key search per data-path address is optimized.

Signed-off-by: Shiri Kuzin
Signed-off-by: Michael Baum
Acked-by: Matan Azrad
---
 doc/guides/cryptodevs/mlx5.rst    |  6 +++
 drivers/crypto/mlx5/mlx5_crypto.c | 63 +++++++++++++++++++++++++++++++
 drivers/crypto/mlx5/mlx5_crypto.h |  3 ++
 3 files changed, 72 insertions(+)

diff --git a/doc/guides/cryptodevs/mlx5.rst b/doc/guides/cryptodevs/mlx5.rst
index ecab385c0d..c41db95d40 100644
--- a/doc/guides/cryptodevs/mlx5.rst
+++ b/doc/guides/cryptodevs/mlx5.rst
@@ -26,6 +26,12 @@ the MKEY is configured to perform crypto operations.
 
 The encryption does not require text to be aligned to the AES block size (128b).
 
+For security reasons and to increase robustness, this driver only deals with virtual
+memory addresses. The way resources allocations are handled by the kernel,
+combined with hardware specifications that allow handling virtual memory
+addresses directly, ensure that DPDK applications cannot access random
+physical memory (or memory that does not belong to the current process).
+
 The PMD uses libibverbs and libmlx5 to access the device firmware or to
 access the hardware components directly.
 There are different levels of objects and bypassing abilities.
diff --git a/drivers/crypto/mlx5/mlx5_crypto.c b/drivers/crypto/mlx5/mlx5_crypto.c
index 2fe2e8b871..9416590aba 100644
--- a/drivers/crypto/mlx5/mlx5_crypto.c
+++ b/drivers/crypto/mlx5/mlx5_crypto.c
@@ -259,6 +259,7 @@ mlx5_crypto_queue_pair_release(struct rte_cryptodev *dev, uint16_t qp_id)
 	claim_zero(mlx5_glue->devx_umem_dereg(qp->umem_obj));
 	if (qp->umem_buf != NULL)
 		rte_free(qp->umem_buf);
+	mlx5_mr_btree_free(&qp->mr_ctrl.cache_bh);
 	mlx5_devx_cq_destroy(&qp->cq_obj);
 	rte_free(qp);
 	dev->data->queue_pairs[qp_id] = NULL;
@@ -340,6 +341,14 @@ mlx5_crypto_queue_pair_setup(struct rte_cryptodev *dev, uint16_t qp_id,
 		DRV_LOG(ERR, "Failed to register QP umem.");
 		goto error;
 	}
+	if (mlx5_mr_btree_init(&qp->mr_ctrl.cache_bh, MLX5_MR_BTREE_CACHE_N,
+			       priv->dev_config.socket_id) != 0) {
+		DRV_LOG(ERR, "Cannot allocate MR Btree for qp %u.",
+			(uint32_t)qp_id);
+		rte_errno = ENOMEM;
+		goto error;
+	}
+	qp->mr_ctrl.dev_gen_ptr = &priv->mr_scache.dev_gen;
 	attr.pd = priv->pdn;
 	attr.uar_index = mlx5_os_get_devx_uar_page_id(priv->uar);
 	attr.cqn = qp->cq_obj.cq->id;
@@ -446,6 +455,40 @@ mlx5_crypto_hw_global_prepare(struct mlx5_crypto_priv *priv)
 	return 0;
 }
 
+/**
+ * Callback for memory event.
+ *
+ * @param event_type
+ *   Memory event type.
+ * @param addr
+ *   Address of memory.
+ * @param len
+ *   Size of memory.
+ */
+static void
+mlx5_crypto_mr_mem_event_cb(enum rte_mem_event event_type, const void *addr,
+			    size_t len, void *arg __rte_unused)
+{
+	struct mlx5_crypto_priv *priv;
+
+	/* Must be called from the primary process. */
+	MLX5_ASSERT(rte_eal_process_type() == RTE_PROC_PRIMARY);
+	switch (event_type) {
+	case RTE_MEM_EVENT_FREE:
+		pthread_mutex_lock(&priv_list_lock);
+		/* Iterate all the existing mlx5 devices. */
+		TAILQ_FOREACH(priv, &mlx5_crypto_priv_list, next)
+			mlx5_free_mr_by_addr(&priv->mr_scache,
+					     priv->ctx->device->name,
+					     addr, len);
+		pthread_mutex_unlock(&priv_list_lock);
+		break;
+	case RTE_MEM_EVENT_ALLOC:
+	default:
+		break;
+	}
+}
+
 /**
  * DPDK callback to register a PCI device.
  *
@@ -528,6 +571,22 @@ mlx5_crypto_pci_probe(struct rte_pci_driver *pci_drv,
 		claim_zero(mlx5_glue->close_device(priv->ctx));
 		return -1;
 	}
+	if (mlx5_mr_btree_init(&priv->mr_scache.cache,
+			       MLX5_MR_BTREE_CACHE_N * 2, rte_socket_id()) != 0) {
+		DRV_LOG(ERR, "Failed to allocate shared cache MR memory.");
+		mlx5_crypto_hw_global_release(priv);
+		rte_cryptodev_pmd_destroy(priv->crypto_dev);
+		claim_zero(mlx5_glue->close_device(priv->ctx));
+		rte_errno = ENOMEM;
+		return -rte_errno;
+	}
+	priv->mr_scache.reg_mr_cb = mlx5_common_verbs_reg_mr;
+	priv->mr_scache.dereg_mr_cb = mlx5_common_verbs_dereg_mr;
+	/* Register callback function for global shared MR cache management. */
+	if (TAILQ_EMPTY(&mlx5_crypto_priv_list))
+		rte_mem_event_callback_register("MLX5_MEM_EVENT_CB",
+						mlx5_crypto_mr_mem_event_cb,
+						NULL);
 	pthread_mutex_lock(&priv_list_lock);
 	TAILQ_INSERT_TAIL(&mlx5_crypto_priv_list, priv, next);
 	pthread_mutex_unlock(&priv_list_lock);
@@ -547,6 +606,10 @@ mlx5_crypto_pci_remove(struct rte_pci_device *pdev)
 	TAILQ_REMOVE(&mlx5_crypto_priv_list, priv, next);
 	pthread_mutex_unlock(&priv_list_lock);
 	if (priv) {
+		if (TAILQ_EMPTY(&mlx5_crypto_priv_list))
+			rte_mem_event_callback_unregister("MLX5_MEM_EVENT_CB",
+							  NULL);
+		mlx5_mr_release_cache(&priv->mr_scache);
 		mlx5_crypto_hw_global_release(priv);
 		rte_cryptodev_pmd_destroy(priv->crypto_dev);
 		claim_zero(mlx5_glue->close_device(priv->ctx));
diff --git a/drivers/crypto/mlx5/mlx5_crypto.h b/drivers/crypto/mlx5/mlx5_crypto.h
index 949092cd37..af292ed19f 100644
--- a/drivers/crypto/mlx5/mlx5_crypto.h
+++ b/drivers/crypto/mlx5/mlx5_crypto.h
@@ -12,6 +12,7 @@
 
 #include <mlx5_common_utils.h>
 #include <mlx5_common_devx.h>
+#include <mlx5_common_mr.h>
 
 #define MLX5_CRYPTO_DEK_HTABLE_SZ (1 << 11)
 #define MLX5_CRYPTO_KEY_LENGTH 80
@@ -27,6 +28,7 @@ struct mlx5_crypto_priv {
 	struct ibv_pd *pd;
 	struct mlx5_hlist *dek_hlist; /* Dek hash list. */
 	struct rte_cryptodev_config dev_config;
+	struct mlx5_mr_share_cache mr_scache; /* Global shared MR cache. */
 };
 
 struct mlx5_crypto_qp {
@@ -36,6 +38,7 @@ struct mlx5_crypto_qp {
 	void *umem_buf;
 	volatile uint32_t *db_rec;
 	struct rte_crypto_op **ops;
+	struct mlx5_mr_ctrl mr_ctrl;
 };
 
 struct mlx5_crypto_dek {
-- 
2.25.1
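
The commit log above describes a two-level lookup: each queue-pair first
probes its own small cache without taking any lock, and only falls back to
the global cache, protected by a lock, on a miss. A minimal, self-contained
C sketch of that idea follows; the structure and function names (mr_entry,
qp_cache_addr2lkey, INVALID_LKEY, etc.) are illustrative stand-ins, not the
actual mlx5_common_mr API, and a real driver would register a new MR on a
shared-cache miss instead of just returning an invalid key.

#include <stdint.h>
#include <stddef.h>
#include <pthread.h>

#define QP_CACHE_SIZE 8
#define INVALID_LKEY UINT32_MAX

struct mr_entry {
	uintptr_t start; /* First byte covered by the MR. */
	uintptr_t end;   /* One past the last byte covered. */
	uint32_t lkey;   /* Memory key the SW writes into the WQE. */
};

struct shared_mr_cache {
	pthread_mutex_t lock;     /* Serializes all queues and control path. */
	struct mr_entry *entries; /* Global table of registered MRs. */
	size_t n;
};

struct qp_mr_cache {
	struct mr_entry entries[QP_CACHE_SIZE]; /* Per-queue, lock-free. */
	size_t victim;                          /* Round-robin replacement. */
	struct shared_mr_cache *shared;
};

/* Second level: linear search in the shared cache under its lock. */
static uint32_t
shared_cache_lookup(struct shared_mr_cache *c, uintptr_t addr,
		    struct mr_entry *out)
{
	uint32_t lkey = INVALID_LKEY;
	size_t i;

	pthread_mutex_lock(&c->lock);
	for (i = 0; i < c->n; i++) {
		if (addr >= c->entries[i].start && addr < c->entries[i].end) {
			*out = c->entries[i];
			lkey = out->lkey;
			break;
		}
	}
	pthread_mutex_unlock(&c->lock);
	return lkey;
}

/* Data-path lookup: probe the per-queue cache first (no lock), then fall
 * back to the shared cache and promote the hit into the per-queue cache.
 */
static uint32_t
qp_cache_addr2lkey(struct qp_mr_cache *qc, uintptr_t addr)
{
	struct mr_entry hit;
	uint32_t lkey;
	size_t i;

	for (i = 0; i < QP_CACHE_SIZE; i++)
		if (addr >= qc->entries[i].start && addr < qc->entries[i].end)
			return qc->entries[i].lkey; /* Fast path, no lock. */
	lkey = shared_cache_lookup(qc->shared, addr, &hit);
	if (lkey != INVALID_LKEY) {
		qc->entries[qc->victim] = hit; /* Remember for next time. */
		qc->victim = (qc->victim + 1) % QP_CACHE_SIZE;
	}
	return lkey;
}

The per-queue btree initialized by mlx5_mr_btree_init() in the patch plays
the role of the per-queue level here, consistent with the "2-level cache per
queue-pair - no locks" noted in the commit log, while mr_scache is the
shared, lock-protected level.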