From mboxrd@z Thu Jan 1 00:00:00 1970
CC: Matan Azrad, Thomas Monjalon, Michael Baum
Date: Thu, 7 Oct 2021 01:03:50 +0300
Message-ID: <20211006220350.2357487-19-michaelba@nvidia.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20211006220350.2357487-1-michaelba@nvidia.com>
References: <20210930172822.1949969-1-michaelba@nvidia.com> <20211006220350.2357487-1-michaelba@nvidia.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
Subject: [dpdk-dev] [PATCH v2 18/18] common/mlx5: share MR mempool registration
List-Id: DPDK patches and discussions
List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: Michael Baum Expand the use of mempool registration to MR management for other drivers. Signed-off-by: Michael Baum Acked-by: Matan Azrad --- drivers/common/mlx5/mlx5_common.c | 148 ++++++++++++++++++++++++++ drivers/common/mlx5/mlx5_common.h | 9 ++ drivers/common/mlx5/mlx5_common_mp.h | 11 ++ drivers/common/mlx5/mlx5_common_mr.c | 94 +++++++++++++--- drivers/common/mlx5/mlx5_common_mr.h | 41 ++++++- drivers/common/mlx5/version.map | 6 +- drivers/compress/mlx5/mlx5_compress.c | 5 +- drivers/crypto/mlx5/mlx5_crypto.c | 5 +- drivers/net/mlx5/linux/mlx5_mp_os.c | 3 +- drivers/net/mlx5/meson.build | 1 - drivers/net/mlx5/mlx5.c | 106 ++---------------- drivers/net/mlx5/mlx5.h | 13 --- drivers/net/mlx5/mlx5_mr.c | 89 ---------------- drivers/net/mlx5/mlx5_rx.c | 15 +-- drivers/net/mlx5/mlx5_rx.h | 14 --- drivers/net/mlx5/mlx5_rxq.c | 1 + drivers/net/mlx5/mlx5_rxtx.h | 26 ----- drivers/net/mlx5/mlx5_tx.h | 27 ++--- drivers/regex/mlx5/mlx5_regex.c | 6 +- 19 files changed, 322 insertions(+), 298 deletions(-) delete mode 100644 drivers/net/mlx5/mlx5_mr.c diff --git a/drivers/common/mlx5/mlx5_common.c b/drivers/common/mlx5/mlx5_common.c index 0ed1477eb8..e6ff045c95 100644 --- a/drivers/common/mlx5/mlx5_common.c +++ b/drivers/common/mlx5/mlx5_common.c @@ -13,6 +13,7 @@ #include "mlx5_common.h" #include "mlx5_common_os.h" +#include "mlx5_common_mp.h" #include "mlx5_common_log.h" #include "mlx5_common_defs.h" #include "mlx5_common_private.h" @@ -302,6 +303,152 @@ mlx5_dev_to_pci_str(const struct rte_device *dev, char *addr, size_t size) #endif } +/** + * Register the mempool for the protection domain. + * + * @param cdev + * Pointer to the mlx5 common device. + * @param mp + * Mempool being registered. + * + * @return + * 0 on success, (-1) on failure and rte_errno is set. + */ +static int +mlx5_dev_mempool_register(struct mlx5_common_device *cdev, + struct rte_mempool *mp) +{ + struct mlx5_mp_id mp_id; + + mlx5_mp_id_init(&mp_id, 0); + return mlx5_mr_mempool_register(&cdev->mr_scache, cdev->pd, mp, &mp_id); +} + +/** + * Unregister the mempool from the protection domain. + * + * @param cdev + * Pointer to the mlx5 common device. + * @param mp + * Mempool being unregistered. + */ +void +mlx5_dev_mempool_unregister(struct mlx5_common_device *cdev, + struct rte_mempool *mp) +{ + struct mlx5_mp_id mp_id; + + mlx5_mp_id_init(&mp_id, 0); + if (mlx5_mr_mempool_unregister(&cdev->mr_scache, mp, &mp_id) < 0) + DRV_LOG(WARNING, "Failed to unregister mempool %s for PD %p: %s", + mp->name, cdev->pd, rte_strerror(rte_errno)); +} + +/** + * rte_mempool_walk() callback to register mempools for the protection domain. + * + * @param mp + * The mempool being walked. + * @param arg + * Pointer to the device shared context. + */ +static void +mlx5_dev_mempool_register_cb(struct rte_mempool *mp, void *arg) +{ + struct mlx5_common_device *cdev = arg; + int ret; + + ret = mlx5_dev_mempool_register(cdev, mp); + if (ret < 0 && rte_errno != EEXIST) + DRV_LOG(ERR, + "Failed to register existing mempool %s for PD %p: %s", + mp->name, cdev->pd, rte_strerror(rte_errno)); +} + +/** + * rte_mempool_walk() callback to unregister mempools + * from the protection domain. + * + * @param mp + * The mempool being walked. + * @param arg + * Pointer to the device shared context. 
+ */ +static void +mlx5_dev_mempool_unregister_cb(struct rte_mempool *mp, void *arg) +{ + mlx5_dev_mempool_unregister((struct mlx5_common_device *)arg, mp); +} + +/** + * Mempool life cycle callback for mlx5 common devices. + * + * @param event + * Mempool life cycle event. + * @param mp + * Associated mempool. + * @param arg + * Pointer to a device shared context. + */ +static void +mlx5_dev_mempool_event_cb(enum rte_mempool_event event, struct rte_mempool *mp, + void *arg) +{ + struct mlx5_common_device *cdev = arg; + + switch (event) { + case RTE_MEMPOOL_EVENT_READY: + if (mlx5_dev_mempool_register(cdev, mp) < 0) + DRV_LOG(ERR, + "Failed to register new mempool %s for PD %p: %s", + mp->name, cdev->pd, rte_strerror(rte_errno)); + break; + case RTE_MEMPOOL_EVENT_DESTROY: + mlx5_dev_mempool_unregister(cdev, mp); + break; + } +} + +int +mlx5_dev_mempool_subscribe(struct mlx5_common_device *cdev) +{ + int ret = 0; + + if (!cdev->config.mr_mempool_reg_en) + return 0; + rte_rwlock_write_lock(&cdev->mr_scache.mprwlock); + if (cdev->mr_scache.mp_cb_registered) + goto exit; + /* Callback for this device may be already registered. */ + ret = rte_mempool_event_callback_register(mlx5_dev_mempool_event_cb, + cdev); + if (ret != 0 && rte_errno != EEXIST) + goto exit; + /* Register mempools only once for this device. */ + if (ret == 0) + rte_mempool_walk(mlx5_dev_mempool_register_cb, cdev); + ret = 0; + cdev->mr_scache.mp_cb_registered = 1; +exit: + rte_rwlock_write_unlock(&cdev->mr_scache.mprwlock); + return ret; +} + +static void +mlx5_dev_mempool_unsubscribe(struct mlx5_common_device *cdev) +{ + int ret; + + if (!cdev->mr_scache.mp_cb_registered || + !cdev->config.mr_mempool_reg_en) + return; + /* Stop watching for mempool events and unregister all mempools. */ + ret = rte_mempool_event_callback_unregister(mlx5_dev_mempool_event_cb, + cdev); + if (ret == 0) + rte_mempool_walk(mlx5_dev_mempool_unregister_cb, cdev); +} + /** * Callback for memory event. * @@ -409,6 +556,7 @@ mlx5_common_dev_release(struct mlx5_common_device *cdev) if (TAILQ_EMPTY(&devices_list)) rte_mem_event_callback_unregister("MLX5_MEM_EVENT_CB", NULL); + mlx5_dev_mempool_unsubscribe(cdev); mlx5_mr_release_cache(&cdev->mr_scache); mlx5_dev_hw_global_release(cdev); } diff --git a/drivers/common/mlx5/mlx5_common.h b/drivers/common/mlx5/mlx5_common.h index 72ff0ff809..744c6a72b3 100644 --- a/drivers/common/mlx5/mlx5_common.h +++ b/drivers/common/mlx5/mlx5_common.h @@ -408,6 +408,15 @@ __rte_internal bool mlx5_dev_is_pci(const struct rte_device *dev); +__rte_internal +int +mlx5_dev_mempool_subscribe(struct mlx5_common_device *cdev); + +__rte_internal +void +mlx5_dev_mempool_unregister(struct mlx5_common_device *cdev, + struct rte_mempool *mp); + /* mlx5_common_mr.c */ __rte_internal diff --git a/drivers/common/mlx5/mlx5_common_mp.h b/drivers/common/mlx5/mlx5_common_mp.h index 527bf3cad8..2276dc921c 100644 --- a/drivers/common/mlx5/mlx5_common_mp.h +++ b/drivers/common/mlx5/mlx5_common_mp.h @@ -64,6 +64,17 @@ struct mlx5_mp_id { uint16_t port_id; }; +/** Key string for IPC. */ +#define MLX5_MP_NAME "common_mlx5_mp" + +/** Initialize a multi-process ID. */ +static inline void +mlx5_mp_id_init(struct mlx5_mp_id *mp_id, uint16_t port_id) +{ + mp_id->port_id = port_id; + strlcpy(mp_id->name, MLX5_MP_NAME, RTE_MP_MAX_NAME_LEN); +} + /** Request timeout for IPC. 
*/ #define MLX5_MP_REQ_TIMEOUT_SEC 5 diff --git a/drivers/common/mlx5/mlx5_common_mr.c b/drivers/common/mlx5/mlx5_common_mr.c index 5bfddac08e..b582e28d59 100644 --- a/drivers/common/mlx5/mlx5_common_mr.c +++ b/drivers/common/mlx5/mlx5_common_mr.c @@ -12,8 +12,10 @@ #include #include "mlx5_glue.h" +#include "mlx5_common.h" #include "mlx5_common_mp.h" #include "mlx5_common_mr.h" +#include "mlx5_common_os.h" #include "mlx5_common_log.h" #include "mlx5_malloc.h" @@ -47,6 +49,20 @@ struct mlx5_mempool_reg { unsigned int mrs_n; }; +void +mlx5_mprq_buf_free_cb(void *addr __rte_unused, void *opaque) +{ + struct mlx5_mprq_buf *buf = opaque; + + if (__atomic_load_n(&buf->refcnt, __ATOMIC_RELAXED) == 1) { + rte_mempool_put(buf->mp, buf); + } else if (unlikely(__atomic_sub_fetch(&buf->refcnt, 1, + __ATOMIC_RELAXED) == 0)) { + __atomic_store_n(&buf->refcnt, 1, __ATOMIC_RELAXED); + rte_mempool_put(buf->mp, buf); + } +} + /** * Expand B-tree table to a given size. Can't be called with holding * memory_hotplug_lock or share_cache.rwlock due to rte_realloc(). @@ -600,6 +616,10 @@ mlx5_mr_create_secondary(void *pd __rte_unused, { int ret; + if (mp_id == NULL) { + rte_errno = EINVAL; + return UINT32_MAX; + } DRV_LOG(DEBUG, "port %u requesting MR creation for address (%p)", mp_id->port_id, (void *)addr); ret = mlx5_mp_req_mr_create(mp_id, addr); @@ -995,10 +1015,11 @@ mr_lookup_caches(void *pd, struct mlx5_mp_id *mp_id, * @return * Searched LKey on success, UINT32_MAX on no match. */ -uint32_t mlx5_mr_addr2mr_bh(void *pd, struct mlx5_mp_id *mp_id, - struct mlx5_mr_share_cache *share_cache, - struct mlx5_mr_ctrl *mr_ctrl, - uintptr_t addr, unsigned int mr_ext_memseg_en) +static uint32_t +mlx5_mr_addr2mr_bh(void *pd, struct mlx5_mp_id *mp_id, + struct mlx5_mr_share_cache *share_cache, + struct mlx5_mr_ctrl *mr_ctrl, uintptr_t addr, + unsigned int mr_ext_memseg_en) { uint32_t lkey; uint16_t bh_idx = 0; @@ -1029,7 +1050,7 @@ uint32_t mlx5_mr_addr2mr_bh(void *pd, struct mlx5_mp_id *mp_id, } /** - * Release all the created MRs and resources on global MR cache of a device. + * Release all the created MRs and resources on global MR cache of a device * list. * * @param share_cache @@ -1076,6 +1097,8 @@ mlx5_mr_create_cache(struct mlx5_mr_share_cache *share_cache, int socket) mlx5_os_set_reg_mr_cb(&share_cache->reg_mr_cb, &share_cache->dereg_mr_cb); rte_rwlock_init(&share_cache->rwlock); + rte_rwlock_init(&share_cache->mprwlock); + share_cache->mp_cb_registered = 0; /* Initialize B-tree and allocate memory for global MR cache table. */ return mlx5_mr_btree_init(&share_cache->cache, MLX5_MR_BTREE_CACHE_N * 2, socket); @@ -1245,8 +1268,8 @@ mlx5_free_mr_by_addr(struct mlx5_mr_share_cache *share_cache, /** * Dump all the created MRs and the global cache entries. * - * @param sh - * Pointer to Ethernet device shared context. + * @param share_cache + * Pointer to a global shared MR cache. */ void mlx5_mr_dump_cache(struct mlx5_mr_share_cache *share_cache __rte_unused) @@ -1581,8 +1604,7 @@ mlx5_mr_mempool_register_primary(struct mlx5_mr_share_cache *share_cache, mpr = mlx5_mempool_reg_lookup(share_cache, mp); if (mpr == NULL) { mlx5_mempool_reg_attach(new_mpr); - LIST_INSERT_HEAD(&share_cache->mempool_reg_list, - new_mpr, next); + LIST_INSERT_HEAD(&share_cache->mempool_reg_list, new_mpr, next); ret = 0; } rte_rwlock_write_unlock(&share_cache->rwlock); @@ -1837,6 +1859,56 @@ mlx5_mr_mempool2mr_bh(struct mlx5_mr_share_cache *share_cache, return lkey; } +/** + * Bottom-half of LKey search on. 
If supported, lookup for the address from + * the mempool. Otherwise, search in old mechanism caches. + * + * @param cdev + * Pointer to mlx5 device. + * @param mp_id + * Multi-process identifier, may be NULL for the primary process. + * @param mr_ctrl + * Pointer to per-queue MR control structure. + * @param mb + * Pointer to mbuf. + * + * @return + * Searched LKey on success, UINT32_MAX on no match. + */ +static uint32_t +mlx5_mr_mb2mr_bh(struct mlx5_common_device *cdev, struct mlx5_mp_id *mp_id, + struct mlx5_mr_ctrl *mr_ctrl, struct rte_mbuf *mb) +{ + uint32_t lkey; + uintptr_t addr = (uintptr_t)mb->buf_addr; + + if (cdev->config.mr_mempool_reg_en) { + struct rte_mempool *mp = NULL; + struct mlx5_mprq_buf *buf; + + if (!RTE_MBUF_HAS_EXTBUF(mb)) { + mp = mlx5_mb2mp(mb); + } else if (mb->shinfo->free_cb == mlx5_mprq_buf_free_cb) { + /* Recover MPRQ mempool. */ + buf = mb->shinfo->fcb_opaque; + mp = buf->mp; + } + if (mp != NULL) { + lkey = mlx5_mr_mempool2mr_bh(&cdev->mr_scache, + mr_ctrl, mp, addr); + /* + * Lookup can only fail on invalid input, e.g. "addr" + * is not from "mp" or "mp" has MEMPOOL_F_NON_IO set. + */ + if (lkey != UINT32_MAX) + return lkey; + } + /* Fallback for generic mechanism in corner cases. */ + } + return mlx5_mr_addr2mr_bh(cdev->pd, mp_id, &cdev->mr_scache, mr_ctrl, + addr, cdev->config.mr_ext_memseg_en); +} + /** * Query LKey from a packet buffer. * @@ -1857,7 +1929,6 @@ mlx5_mr_mb2mr(struct mlx5_common_device *cdev, struct mlx5_mp_id *mp_id, struct mlx5_mr_ctrl *mr_ctrl, struct rte_mbuf *mbuf) { uint32_t lkey; - uintptr_t addr = (uintptr_t)mbuf->buf_addr; /* Check generation bit to see if there's any change on existing MRs. */ if (unlikely(*mr_ctrl->dev_gen_ptr != mr_ctrl->cur_gen)) @@ -1868,6 +1939,5 @@ mlx5_mr_mb2mr(struct mlx5_common_device *cdev, struct mlx5_mp_id *mp_id, if (likely(lkey != UINT32_MAX)) return lkey; /* Take slower bottom-half on miss. */ - return mlx5_mr_addr2mr_bh(cdev->pd, mp_id, &cdev->mr_scache, mr_ctrl, - addr, cdev->config.mr_ext_memseg_en); + return mlx5_mr_mb2mr_bh(cdev, mp_id, mr_ctrl, mbuf); } diff --git a/drivers/common/mlx5/mlx5_common_mr.h b/drivers/common/mlx5/mlx5_common_mr.h index 8a7af05ca5..e74f81641c 100644 --- a/drivers/common/mlx5/mlx5_common_mr.h +++ b/drivers/common/mlx5/mlx5_common_mr.h @@ -79,6 +79,8 @@ LIST_HEAD(mlx5_mempool_reg_list, mlx5_mempool_reg); struct mlx5_mr_share_cache { uint32_t dev_gen; /* Generation number to flush local caches. */ rte_rwlock_t rwlock; /* MR cache Lock. */ + rte_rwlock_t mprwlock; /* Mempool Registration Lock. */ + uint8_t mp_cb_registered; /* Mempool are Registered. */ struct mlx5_mr_btree cache; /* Global MR cache table. */ struct mlx5_mr_list mr_list; /* Registered MR list. */ struct mlx5_mr_list mr_free_list; /* Freed MR list. */ @@ -87,6 +89,40 @@ struct mlx5_mr_share_cache { mlx5_dereg_mr_t dereg_mr_cb; /* Callback to dereg_mr func */ } __rte_packed; +/* Multi-Packet RQ buffer header. */ +struct mlx5_mprq_buf { + struct rte_mempool *mp; + uint16_t refcnt; /* Atomically accessed refcnt. */ + uint8_t pad[RTE_PKTMBUF_HEADROOM]; /* Headroom for the first packet. */ + struct rte_mbuf_ext_shared_info shinfos[]; + /* + * Shared information per stride. + * More memory will be allocated for the first stride head-room and for + * the strides data. + */ +} __rte_cache_aligned; + +__rte_internal +void mlx5_mprq_buf_free_cb(void *addr, void *opaque); + +/** + * Get Memory Pool (MP) from mbuf. If mbuf is indirect, the pool from which the + * cloned mbuf is allocated is returned instead. 
+ * + * @param buf + * Pointer to mbuf. + * + * @return + * Memory pool where data is located for given mbuf. + */ +static inline struct rte_mempool * +mlx5_mb2mp(struct rte_mbuf *buf) +{ + if (unlikely(RTE_MBUF_CLONED(buf))) + return rte_mbuf_from_indirect(buf)->pool; + return buf->pool; +} + /** * Look up LKey from given lookup table by linear search. Firstly look up the * last-hit entry. If miss, the entire array is searched. If found, update the @@ -133,11 +169,6 @@ __rte_internal void mlx5_mr_btree_free(struct mlx5_mr_btree *bt); void mlx5_mr_btree_dump(struct mlx5_mr_btree *bt __rte_unused); __rte_internal -uint32_t mlx5_mr_addr2mr_bh(void *pd, struct mlx5_mp_id *mp_id, - struct mlx5_mr_share_cache *share_cache, - struct mlx5_mr_ctrl *mr_ctrl, - uintptr_t addr, unsigned int mr_ext_memseg_en); -__rte_internal uint32_t mlx5_mr_mempool2mr_bh(struct mlx5_mr_share_cache *share_cache, struct mlx5_mr_ctrl *mr_ctrl, struct rte_mempool *mp, uintptr_t addr); diff --git a/drivers/common/mlx5/version.map b/drivers/common/mlx5/version.map index 28a0944a93..1167fcd323 100644 --- a/drivers/common/mlx5/version.map +++ b/drivers/common/mlx5/version.map @@ -13,6 +13,8 @@ INTERNAL { mlx5_common_verbs_dereg_mr; # WINDOWS_NO_EXPORT mlx5_dev_is_pci; + mlx5_dev_mempool_unregister; + mlx5_dev_mempool_subscribe; mlx5_devx_alloc_uar; # WINDOWS_NO_EXPORT @@ -104,10 +106,10 @@ INTERNAL { mlx5_mp_uninit_primary; # WINDOWS_NO_EXPORT mlx5_mp_uninit_secondary; # WINDOWS_NO_EXPORT - mlx5_mr_addr2mr_bh; + mlx5_mprq_buf_free_cb; mlx5_mr_btree_free; mlx5_mr_create_primary; - mlx5_mr_ctrl_init; + mlx5_mr_ctrl_init; mlx5_mr_flush_local_cache; mlx5_mr_mb2mr; diff --git a/drivers/compress/mlx5/mlx5_compress.c b/drivers/compress/mlx5/mlx5_compress.c index 91f5ffdf87..bc42b2f755 100644 --- a/drivers/compress/mlx5/mlx5_compress.c +++ b/drivers/compress/mlx5/mlx5_compress.c @@ -386,8 +386,9 @@ mlx5_compress_dev_stop(struct rte_compressdev *dev) static int mlx5_compress_dev_start(struct rte_compressdev *dev) { - RTE_SET_USED(dev); - return 0; + struct mlx5_compress_priv *priv = dev->data->dev_private; + + return mlx5_dev_mempool_subscribe(priv->cdev); } static void diff --git a/drivers/crypto/mlx5/mlx5_crypto.c b/drivers/crypto/mlx5/mlx5_crypto.c index 064501ba8c..4cf4c0ec8e 100644 --- a/drivers/crypto/mlx5/mlx5_crypto.c +++ b/drivers/crypto/mlx5/mlx5_crypto.c @@ -142,8 +142,9 @@ mlx5_crypto_dev_stop(struct rte_cryptodev *dev) static int mlx5_crypto_dev_start(struct rte_cryptodev *dev) { - RTE_SET_USED(dev); - return 0; + struct mlx5_crypto_priv *priv = dev->data->dev_private; + + return mlx5_dev_mempool_subscribe(priv->cdev); } static int diff --git a/drivers/net/mlx5/linux/mlx5_mp_os.c b/drivers/net/mlx5/linux/mlx5_mp_os.c index c3b6495d9e..017a731b3f 100644 --- a/drivers/net/mlx5/linux/mlx5_mp_os.c +++ b/drivers/net/mlx5/linux/mlx5_mp_os.c @@ -90,8 +90,7 @@ mlx5_mp_os_primary_handle(const struct rte_mp_msg *mp_msg, const void *peer) switch (param->type) { case MLX5_MP_REQ_CREATE_MR: mp_init_msg(&priv->mp_id, &mp_res, param->type); - lkey = mlx5_mr_create_primary(cdev->pd, - &priv->sh->cdev->mr_scache, + lkey = mlx5_mr_create_primary(cdev->pd, &cdev->mr_scache, &entry, param->args.addr, cdev->config.mr_ext_memseg_en); if (lkey == UINT32_MAX) diff --git a/drivers/net/mlx5/meson.build b/drivers/net/mlx5/meson.build index dac7f1fabf..636a1be890 100644 --- a/drivers/net/mlx5/meson.build +++ b/drivers/net/mlx5/meson.build @@ -18,7 +18,6 @@ sources = files( 'mlx5_flow_dv.c', 'mlx5_flow_aso.c', 'mlx5_mac.c', - 'mlx5_mr.c', 
'mlx5_rss.c', 'mlx5_rx.c', 'mlx5_rxmode.c', diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c index 17113be873..e9aa41432e 100644 --- a/drivers/net/mlx5/mlx5.c +++ b/drivers/net/mlx5/mlx5.c @@ -1097,28 +1097,8 @@ mlx5_alloc_rxtx_uars(struct mlx5_dev_ctx_shared *sh, } /** - * Unregister the mempool from the protection domain. - * - * @param sh - * Pointer to the device shared context. - * @param mp - * Mempool being unregistered. - */ -static void -mlx5_dev_ctx_shared_mempool_unregister(struct mlx5_dev_ctx_shared *sh, - struct rte_mempool *mp) -{ - struct mlx5_mp_id mp_id; - - mlx5_mp_id_init(&mp_id, 0); - if (mlx5_mr_mempool_unregister(&sh->cdev->mr_scache, mp, &mp_id) < 0) - DRV_LOG(WARNING, "Failed to unregister mempool %s for PD %p: %s", - mp->name, sh->cdev->pd, rte_strerror(rte_errno)); -} - -/** - * rte_mempool_walk() callback to register mempools - * for the protection domain. + * rte_mempool_walk() callback to unregister Rx mempools. + * It used when implicit mempool registration is disabled. * * @param mp * The mempool being walked. @@ -1126,66 +1106,11 @@ mlx5_dev_ctx_shared_mempool_unregister(struct mlx5_dev_ctx_shared *sh, * Pointer to the device shared context. */ static void -mlx5_dev_ctx_shared_mempool_register_cb(struct rte_mempool *mp, void *arg) +mlx5_dev_ctx_shared_rx_mempool_unregister_cb(struct rte_mempool *mp, void *arg) { struct mlx5_dev_ctx_shared *sh = arg; - struct mlx5_mp_id mp_id; - int ret; - mlx5_mp_id_init(&mp_id, 0); - ret = mlx5_mr_mempool_register(&sh->cdev->mr_scache, sh->cdev->pd, mp, - &mp_id); - if (ret < 0 && rte_errno != EEXIST) - DRV_LOG(ERR, "Failed to register existing mempool %s for PD %p: %s", - mp->name, sh->cdev->pd, rte_strerror(rte_errno)); -} - -/** - * rte_mempool_walk() callback to unregister mempools - * from the protection domain. - * - * @param mp - * The mempool being walked. - * @param arg - * Pointer to the device shared context. - */ -static void -mlx5_dev_ctx_shared_mempool_unregister_cb(struct rte_mempool *mp, void *arg) -{ - mlx5_dev_ctx_shared_mempool_unregister - ((struct mlx5_dev_ctx_shared *)arg, mp); -} - -/** - * Mempool life cycle callback for Ethernet devices. - * - * @param event - * Mempool life cycle event. - * @param mp - * Associated mempool. - * @param arg - * Pointer to a device shared context. - */ -static void -mlx5_dev_ctx_shared_mempool_event_cb(enum rte_mempool_event event, - struct rte_mempool *mp, void *arg) -{ - struct mlx5_dev_ctx_shared *sh = arg; - struct mlx5_mp_id mp_id; - - switch (event) { - case RTE_MEMPOOL_EVENT_READY: - mlx5_mp_id_init(&mp_id, 0); - if (mlx5_mr_mempool_register(&sh->cdev->mr_scache, sh->cdev->pd, - mp, &mp_id) < 0) - DRV_LOG(ERR, "Failed to register new mempool %s for PD %p: %s", - mp->name, sh->cdev->pd, - rte_strerror(rte_errno)); - break; - case RTE_MEMPOOL_EVENT_DESTROY: - mlx5_dev_ctx_shared_mempool_unregister(sh, mp); - break; - } + mlx5_dev_mempool_unregister(sh->cdev, mp); } /** @@ -1206,7 +1131,7 @@ mlx5_dev_ctx_shared_rx_mempool_event_cb(enum rte_mempool_event event, struct mlx5_dev_ctx_shared *sh = arg; if (event == RTE_MEMPOOL_EVENT_DESTROY) - mlx5_dev_ctx_shared_mempool_unregister(sh, mp); + mlx5_dev_mempool_unregister(sh->cdev, mp); } int @@ -1222,15 +1147,7 @@ mlx5_dev_ctx_shared_mempool_subscribe(struct rte_eth_dev *dev) (mlx5_dev_ctx_shared_rx_mempool_event_cb, sh); return ret == 0 || rte_errno == EEXIST ? 0 : ret; } - /* Callback for this shared context may be already registered. 
*/ - ret = rte_mempool_event_callback_register - (mlx5_dev_ctx_shared_mempool_event_cb, sh); - if (ret != 0 && rte_errno != EEXIST) - return ret; - /* Register mempools only once for this shared context. */ - if (ret == 0) - rte_mempool_walk(mlx5_dev_ctx_shared_mempool_register_cb, sh); - return 0; + return mlx5_dev_mempool_subscribe(sh->cdev); } /** @@ -1414,14 +1331,13 @@ mlx5_free_shared_dev_ctx(struct mlx5_dev_ctx_shared *sh) if (--sh->refcnt) goto exit; /* Stop watching for mempool events and unregister all mempools. */ - ret = rte_mempool_event_callback_unregister - (mlx5_dev_ctx_shared_mempool_event_cb, sh); - if (ret < 0 && rte_errno == ENOENT) + if (!sh->cdev->config.mr_mempool_reg_en) { ret = rte_mempool_event_callback_unregister (mlx5_dev_ctx_shared_rx_mempool_event_cb, sh); - if (ret == 0) - rte_mempool_walk(mlx5_dev_ctx_shared_mempool_unregister_cb, - sh); + if (ret == 0) + rte_mempool_walk + (mlx5_dev_ctx_shared_rx_mempool_unregister_cb, sh); + } /* Remove context from the global device list. */ LIST_REMOVE(sh, next); /* Release flow workspaces objects on the last device. */ diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h index 4f823baa6d..059d400384 100644 --- a/drivers/net/mlx5/mlx5.h +++ b/drivers/net/mlx5/mlx5.h @@ -153,17 +153,6 @@ struct mlx5_flow_dump_ack { int rc; /**< Return code. */ }; -/** Key string for IPC. */ -#define MLX5_MP_NAME "net_mlx5_mp" - -/** Initialize a multi-process ID. */ -static inline void -mlx5_mp_id_init(struct mlx5_mp_id *mp_id, uint16_t port_id) -{ - mp_id->port_id = port_id; - strlcpy(mp_id->name, MLX5_MP_NAME, RTE_MP_MAX_NAME_LEN); -} - LIST_HEAD(mlx5_dev_list, mlx5_dev_ctx_shared); /* Shared data between primary and secondary processes. */ @@ -172,8 +161,6 @@ struct mlx5_shared_data { /* Global spinlock for primary and secondary processes. */ int init_done; /* Whether primary has done initialization. */ unsigned int secondary_cnt; /* Number of secondary processes init'd. */ - struct mlx5_dev_list mem_event_cb_list; - rte_rwlock_t mem_event_rwlock; }; /* Per-process data structure, not visible to other processes. */ diff --git a/drivers/net/mlx5/mlx5_mr.c b/drivers/net/mlx5/mlx5_mr.c deleted file mode 100644 index ac3d8e2492..0000000000 --- a/drivers/net/mlx5/mlx5_mr.c +++ /dev/null @@ -1,89 +0,0 @@ -/* SPDX-License-Identifier: BSD-3-Clause - * Copyright 2016 6WIND S.A. - * Copyright 2016 Mellanox Technologies, Ltd - */ - -#include -#include -#include -#include - -#include -#include - -#include "mlx5.h" -#include "mlx5_rxtx.h" -#include "mlx5_rx.h" -#include "mlx5_tx.h" - -/** - * Bottom-half of LKey search on Tx. - * - * @param txq - * Pointer to Tx queue structure. - * @param addr - * Search key. - * - * @return - * Searched LKey on success, UINT32_MAX on no match. - */ -static uint32_t -mlx5_tx_addr2mr_bh(struct mlx5_txq_data *txq, uintptr_t addr) -{ - struct mlx5_txq_ctrl *txq_ctrl = - container_of(txq, struct mlx5_txq_ctrl, txq); - struct mlx5_mr_ctrl *mr_ctrl = &txq->mr_ctrl; - struct mlx5_priv *priv = txq_ctrl->priv; - - return mlx5_mr_addr2mr_bh(priv->sh->cdev->pd, &priv->mp_id, - &priv->sh->cdev->mr_scache, mr_ctrl, addr, - priv->sh->cdev->config.mr_ext_memseg_en); -} - -/** - * Bottom-half of LKey search on Tx. If it can't be searched in the memseg - * list, register the mempool of the mbuf as externally allocated memory. - * - * @param txq - * Pointer to Tx queue structure. - * @param mb - * Pointer to mbuf. - * - * @return - * Searched LKey on success, UINT32_MAX on no match. 
- */ -uint32_t -mlx5_tx_mb2mr_bh(struct mlx5_txq_data *txq, struct rte_mbuf *mb) -{ - struct mlx5_txq_ctrl *txq_ctrl = - container_of(txq, struct mlx5_txq_ctrl, txq); - struct mlx5_mr_ctrl *mr_ctrl = &txq->mr_ctrl; - struct mlx5_priv *priv = txq_ctrl->priv; - uintptr_t addr = (uintptr_t)mb->buf_addr; - uint32_t lkey; - - if (priv->sh->cdev->config.mr_mempool_reg_en) { - struct rte_mempool *mp = NULL; - struct mlx5_mprq_buf *buf; - - if (!RTE_MBUF_HAS_EXTBUF(mb)) { - mp = mlx5_mb2mp(mb); - } else if (mb->shinfo->free_cb == mlx5_mprq_buf_free_cb) { - /* Recover MPRQ mempool. */ - buf = mb->shinfo->fcb_opaque; - mp = buf->mp; - } - if (mp != NULL) { - lkey = mlx5_mr_mempool2mr_bh(&priv->sh->cdev->mr_scache, - mr_ctrl, mp, addr); - /* - * Lookup can only fail on invalid input, e.g. "addr" - * is not from "mp" or "mp" has MEMPOOL_F_NON_IO set. - */ - if (lkey != UINT32_MAX) - return lkey; - } - /* Fallback for generic mechanism in corner cases. */ - } - return mlx5_tx_addr2mr_bh(txq, addr); -} diff --git a/drivers/net/mlx5/mlx5_rx.c b/drivers/net/mlx5/mlx5_rx.c index c83c7f4a39..8fa15e9820 100644 --- a/drivers/net/mlx5/mlx5_rx.c +++ b/drivers/net/mlx5/mlx5_rx.c @@ -18,6 +18,7 @@ #include #include +#include #include "mlx5_autoconf.h" #include "mlx5_defs.h" @@ -1027,20 +1028,6 @@ mlx5_lro_update_hdr(uint8_t *__rte_restrict padd, mlx5_lro_update_tcp_hdr(h.tcp, cqe, phcsum, l4_type); } -void -mlx5_mprq_buf_free_cb(void *addr __rte_unused, void *opaque) -{ - struct mlx5_mprq_buf *buf = opaque; - - if (__atomic_load_n(&buf->refcnt, __ATOMIC_RELAXED) == 1) { - rte_mempool_put(buf->mp, buf); - } else if (unlikely(__atomic_sub_fetch(&buf->refcnt, 1, - __ATOMIC_RELAXED) == 0)) { - __atomic_store_n(&buf->refcnt, 1, __ATOMIC_RELAXED); - rte_mempool_put(buf->mp, buf); - } -} - void mlx5_mprq_buf_free(struct mlx5_mprq_buf *buf) { diff --git a/drivers/net/mlx5/mlx5_rx.h b/drivers/net/mlx5/mlx5_rx.h index 42a12151fc..84a21fbfb9 100644 --- a/drivers/net/mlx5/mlx5_rx.h +++ b/drivers/net/mlx5/mlx5_rx.h @@ -43,19 +43,6 @@ struct rxq_zip { uint32_t cqe_cnt; /* Number of CQEs. */ }; -/* Multi-Packet RQ buffer header. */ -struct mlx5_mprq_buf { - struct rte_mempool *mp; - uint16_t refcnt; /* Atomically accessed refcnt. */ - uint8_t pad[RTE_PKTMBUF_HEADROOM]; /* Headroom for the first packet. */ - struct rte_mbuf_ext_shared_info shinfos[]; - /* - * Shared information per stride. - * More memory will be allocated for the first stride head-room and for - * the strides data. - */ -} __rte_cache_aligned; - /* Get pointer to the first stride. 
*/ #define mlx5_mprq_buf_addr(ptr, strd_n) (RTE_PTR_ADD((ptr), \ sizeof(struct mlx5_mprq_buf) + \ @@ -255,7 +242,6 @@ int mlx5_hrxq_modify(struct rte_eth_dev *dev, uint32_t hxrq_idx, uint16_t mlx5_rx_burst(void *dpdk_rxq, struct rte_mbuf **pkts, uint16_t pkts_n); void mlx5_rxq_initialize(struct mlx5_rxq_data *rxq); __rte_noinline int mlx5_rx_err_handle(struct mlx5_rxq_data *rxq, uint8_t vec); -void mlx5_mprq_buf_free_cb(void *addr, void *opaque); void mlx5_mprq_buf_free(struct mlx5_mprq_buf *buf); uint16_t mlx5_rx_burst_mprq(void *dpdk_rxq, struct rte_mbuf **pkts, uint16_t pkts_n); diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c index 174899e661..e1a4ded688 100644 --- a/drivers/net/mlx5/mlx5_rxq.c +++ b/drivers/net/mlx5/mlx5_rxq.c @@ -21,6 +21,7 @@ #include #include +#include #include "mlx5_defs.h" #include "mlx5.h" diff --git a/drivers/net/mlx5/mlx5_rxtx.h b/drivers/net/mlx5/mlx5_rxtx.h index b400295e7d..876aa14ae6 100644 --- a/drivers/net/mlx5/mlx5_rxtx.h +++ b/drivers/net/mlx5/mlx5_rxtx.h @@ -43,30 +43,4 @@ int mlx5_queue_state_modify_primary(struct rte_eth_dev *dev, int mlx5_queue_state_modify(struct rte_eth_dev *dev, struct mlx5_mp_arg_queue_state_modify *sm); -/* mlx5_mr.c */ - -void mlx5_mr_flush_local_cache(struct mlx5_mr_ctrl *mr_ctrl); -int mlx5_net_dma_map(struct rte_device *rte_dev, void *addr, uint64_t iova, - size_t len); -int mlx5_net_dma_unmap(struct rte_device *rte_dev, void *addr, uint64_t iova, - size_t len); - -/** - * Get Memory Pool (MP) from mbuf. If mbuf is indirect, the pool from which the - * cloned mbuf is allocated is returned instead. - * - * @param buf - * Pointer to mbuf. - * - * @return - * Memory pool where data is located for given mbuf. - */ -static inline struct rte_mempool * -mlx5_mb2mp(struct rte_mbuf *buf) -{ - if (unlikely(RTE_MBUF_CLONED(buf))) - return rte_mbuf_from_indirect(buf)->pool; - return buf->pool; -} - #endif /* RTE_PMD_MLX5_RXTX_H_ */ diff --git a/drivers/net/mlx5/mlx5_tx.h b/drivers/net/mlx5/mlx5_tx.h index 1f124b92e6..de2e284929 100644 --- a/drivers/net/mlx5/mlx5_tx.h +++ b/drivers/net/mlx5/mlx5_tx.h @@ -235,10 +235,6 @@ void mlx5_txq_info_get(struct rte_eth_dev *dev, uint16_t queue_id, int mlx5_tx_burst_mode_get(struct rte_eth_dev *dev, uint16_t tx_queue_id, struct rte_eth_burst_mode *mode); -/* mlx5_mr.c */ - -uint32_t mlx5_tx_mb2mr_bh(struct mlx5_txq_data *txq, struct rte_mbuf *mb); - /* mlx5_tx_empw.c */ MLX5_TXOFF_PRE_DECL(full_empw); @@ -356,12 +352,12 @@ __mlx5_uar_write64(uint64_t val, void *addr, rte_spinlock_t *lock) #endif /** - * Query LKey from a packet buffer for Tx. If not found, add the mempool. + * Query LKey from a packet buffer for Tx. * * @param txq * Pointer to Tx queue structure. - * @param addr - * Address to search. + * @param mb + * Pointer to mbuf. * * @return * Searched LKey on success, UINT32_MAX on no match. @@ -370,19 +366,12 @@ static __rte_always_inline uint32_t mlx5_tx_mb2mr(struct mlx5_txq_data *txq, struct rte_mbuf *mb) { struct mlx5_mr_ctrl *mr_ctrl = &txq->mr_ctrl; - uintptr_t addr = (uintptr_t)mb->buf_addr; - uint32_t lkey; - - /* Check generation bit to see if there's any change on existing MRs. */ - if (unlikely(*mr_ctrl->dev_gen_ptr != mr_ctrl->cur_gen)) - mlx5_mr_flush_local_cache(mr_ctrl); - /* Linear search on MR cache array. 
*/ - lkey = mlx5_mr_lookup_lkey(mr_ctrl->cache, &mr_ctrl->mru, - MLX5_MR_CACHE_N, addr); - if (likely(lkey != UINT32_MAX)) - return lkey; + struct mlx5_txq_ctrl *txq_ctrl = + container_of(txq, struct mlx5_txq_ctrl, txq); + struct mlx5_priv *priv = txq_ctrl->priv; + /* Take slower bottom-half on miss. */ - return mlx5_tx_mb2mr_bh(txq, mb); + return mlx5_mr_mb2mr(priv->sh->cdev, &priv->mp_id, mr_ctrl, mb); } /** diff --git a/drivers/regex/mlx5/mlx5_regex.c b/drivers/regex/mlx5/mlx5_regex.c index 7f900b67ee..b8a513e1fa 100644 --- a/drivers/regex/mlx5/mlx5_regex.c +++ b/drivers/regex/mlx5/mlx5_regex.c @@ -36,9 +36,11 @@ const struct rte_regexdev_ops mlx5_regexdev_ops = { }; int -mlx5_regex_start(struct rte_regexdev *dev __rte_unused) +mlx5_regex_start(struct rte_regexdev *dev) { - return 0; + struct mlx5_regex_priv *priv = dev->data->dev_private; + + return mlx5_dev_mempool_subscribe(priv->cdev); } int -- 2.25.1