From mboxrd@z Thu Jan 1 00:00:00 1970
From: Xueming Li
To:
Cc: Lior Margalit, Matan Azrad, Viacheslav Ovsiienko
Date: Sat, 16 Oct 2021 17:12:05 +0800
Subject: [dpdk-dev] [PATCH v2 05/13] net/mlx5: split multiple packet Rq memory pool
Message-ID: <20211016091214.1831902-6-xuemingl@nvidia.com>
In-Reply-To: <20211016091214.1831902-1-xuemingl@nvidia.com>
References: <20210926111904.237736-1-xuemingl@nvidia.com>
 <20211016091214.1831902-1-xuemingl@nvidia.com>
X-Mailer: git-send-email 2.33.0
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
List-Id: DPDK patches and discussions

Port information is invisible from a shared Rx queue, so split the MPRQ
(Multi-Packet RQ) mempool from per-device to per-Rx-queue scope. Also
change the pool flag to mp_sc (single-consumer get), since each mempool
is now consumed by a single Rx queue.

Signed-off-by: Xueming Li
---
 drivers/net/mlx5/mlx5.c         |   1 -
 drivers/net/mlx5/mlx5_rx.h      |   4 +-
 drivers/net/mlx5/mlx5_rxq.c     | 109 ++++++++++++--------------------
 drivers/net/mlx5/mlx5_trigger.c |  10 ++-
 4 files changed, 47 insertions(+), 77 deletions(-)

diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c
index 45ccfe27845..1033c29cb82 100644
--- a/drivers/net/mlx5/mlx5.c
+++ b/drivers/net/mlx5/mlx5.c
@@ -1608,7 +1608,6 @@ mlx5_dev_close(struct rte_eth_dev *dev)
 	mlx5_drop_action_destroy(dev);
 	if (priv->mreg_cp_tbl)
 		mlx5_hlist_destroy(priv->mreg_cp_tbl);
-	mlx5_mprq_free_mp(dev);
 	if (priv->sh->ct_mng)
 		mlx5_flow_aso_ct_mng_close(priv->sh);
 	mlx5_os_free_shared_dr(priv);
diff --git a/drivers/net/mlx5/mlx5_rx.h b/drivers/net/mlx5/mlx5_rx.h
index d44c8078dea..a8e0c3162b0 100644
--- a/drivers/net/mlx5/mlx5_rx.h
+++ b/drivers/net/mlx5/mlx5_rx.h
@@ -179,8 +179,8 @@ struct mlx5_rxq_ctrl {
 extern uint8_t rss_hash_default_key[];
 
 unsigned int mlx5_rxq_cqe_num(struct mlx5_rxq_data *rxq_data);
-int mlx5_mprq_free_mp(struct rte_eth_dev *dev);
-int mlx5_mprq_alloc_mp(struct rte_eth_dev *dev);
+int mlx5_mprq_free_mp(struct rte_eth_dev *dev, struct mlx5_rxq_ctrl *rxq_ctrl);
+int mlx5_mprq_alloc_mp(struct rte_eth_dev *dev, struct mlx5_rxq_ctrl *rxq_ctrl);
 int mlx5_rx_queue_start(struct rte_eth_dev *dev, uint16_t queue_id);
 int mlx5_rx_queue_stop(struct rte_eth_dev *dev, uint16_t queue_id);
 int mlx5_rx_queue_start_primary(struct rte_eth_dev *dev, uint16_t queue_id);
diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index 1cb99de1ae7..f29a8143967 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -1087,7 +1087,7 @@ mlx5_mprq_buf_init(struct rte_mempool *mp, void *opaque_arg,
 }
 
 /**
- * Free mempool of Multi-Packet RQ.
+ * Free RXQ mempool of Multi-Packet RQ.
  *
  * @param dev
  *   Pointer to Ethernet device.
@@ -1096,16 +1096,15 @@ mlx5_mprq_buf_init(struct rte_mempool *mp, void *opaque_arg,
  *   0 on success, negative errno value on failure.
  */
 int
-mlx5_mprq_free_mp(struct rte_eth_dev *dev)
+mlx5_mprq_free_mp(struct rte_eth_dev *dev, struct mlx5_rxq_ctrl *rxq_ctrl)
 {
-	struct mlx5_priv *priv = dev->data->dev_private;
-	struct rte_mempool *mp = priv->mprq_mp;
-	unsigned int i;
+	struct mlx5_rxq_data *rxq = &rxq_ctrl->rxq;
+	struct rte_mempool *mp = rxq->mprq_mp;
 
 	if (mp == NULL)
 		return 0;
-	DRV_LOG(DEBUG, "port %u freeing mempool (%s) for Multi-Packet RQ",
-		dev->data->port_id, mp->name);
+	DRV_LOG(DEBUG, "port %u queue %hu freeing mempool (%s) for Multi-Packet RQ",
+		dev->data->port_id, rxq->idx, mp->name);
 	/*
 	 * If a buffer in the pool has been externally attached to a mbuf and it
 	 * is still in use by application, destroying the Rx queue can spoil
@@ -1123,34 +1122,28 @@ mlx5_mprq_free_mp(struct rte_eth_dev *dev)
 		return -rte_errno;
 	}
 	rte_mempool_free(mp);
-	/* Unset mempool for each Rx queue. */
-	for (i = 0; i != priv->rxqs_n; ++i) {
-		struct mlx5_rxq_data *rxq = (*priv->rxqs)[i];
-
-		if (rxq == NULL)
-			continue;
-		rxq->mprq_mp = NULL;
-	}
-	priv->mprq_mp = NULL;
+	rxq->mprq_mp = NULL;
 	return 0;
 }
 
 /**
- * Allocate a mempool for Multi-Packet RQ. All configured Rx queues share the
- * mempool. If already allocated, reuse it if there're enough elements.
+ * Allocate RXQ a mempool for Multi-Packet RQ.
+ * If already allocated, reuse it if there're enough elements.
  * Otherwise, resize it.
  *
  * @param dev
  *   Pointer to Ethernet device.
+ * @param rxq_ctrl
+ *   Pointer to RXQ.
  *
  * @return
  *   0 on success, negative errno value on failure.
  */
 int
-mlx5_mprq_alloc_mp(struct rte_eth_dev *dev)
+mlx5_mprq_alloc_mp(struct rte_eth_dev *dev, struct mlx5_rxq_ctrl *rxq_ctrl)
 {
-	struct mlx5_priv *priv = dev->data->dev_private;
-	struct rte_mempool *mp = priv->mprq_mp;
+	struct mlx5_rxq_data *rxq = &rxq_ctrl->rxq;
+	struct rte_mempool *mp = rxq->mprq_mp;
 	char name[RTE_MEMPOOL_NAMESIZE];
 	unsigned int desc = 0;
 	unsigned int buf_len;
@@ -1158,28 +1151,15 @@ mlx5_mprq_alloc_mp(struct rte_eth_dev *dev)
 	unsigned int obj_size;
 	unsigned int strd_num_n = 0;
 	unsigned int strd_sz_n = 0;
-	unsigned int i;
-	unsigned int n_ibv = 0;
 
-	if (!mlx5_mprq_enabled(dev))
+	if (rxq_ctrl == NULL || rxq_ctrl->type != MLX5_RXQ_TYPE_STANDARD)
 		return 0;
-	/* Count the total number of descriptors configured. */
-	for (i = 0; i != priv->rxqs_n; ++i) {
-		struct mlx5_rxq_data *rxq = (*priv->rxqs)[i];
-		struct mlx5_rxq_ctrl *rxq_ctrl = container_of
-			(rxq, struct mlx5_rxq_ctrl, rxq);
-
-		if (rxq == NULL || rxq_ctrl->type != MLX5_RXQ_TYPE_STANDARD)
-			continue;
-		n_ibv++;
-		desc += 1 << rxq->elts_n;
-		/* Get the max number of strides. */
-		if (strd_num_n < rxq->strd_num_n)
-			strd_num_n = rxq->strd_num_n;
-		/* Get the max size of a stride. */
-		if (strd_sz_n < rxq->strd_sz_n)
-			strd_sz_n = rxq->strd_sz_n;
-	}
+	/* Number of descriptors configured. */
+	desc = 1 << rxq->elts_n;
+	/* Get the max number of strides. */
+	strd_num_n = rxq->strd_num_n;
+	/* Get the max size of a stride. */
+	strd_sz_n = rxq->strd_sz_n;
 	MLX5_ASSERT(strd_num_n && strd_sz_n);
 	buf_len = (1 << strd_num_n) * (1 << strd_sz_n);
 	obj_size = sizeof(struct mlx5_mprq_buf) + buf_len + (1 << strd_num_n) *
@@ -1196,7 +1176,7 @@ mlx5_mprq_alloc_mp(struct rte_eth_dev *dev)
 	 * this Mempool gets available again.
 	 */
 	desc *= 4;
-	obj_num = desc + MLX5_MPRQ_MP_CACHE_SZ * n_ibv;
+	obj_num = desc + MLX5_MPRQ_MP_CACHE_SZ;
 	/*
 	 * rte_mempool_create_empty() has sanity check to refuse large cache
 	 * size compared to the number of elements.
@@ -1209,50 +1189,41 @@
 		DRV_LOG(DEBUG, "port %u mempool %s is being reused",
 			dev->data->port_id, mp->name);
 		/* Reuse. */
-		goto exit;
-	} else if (mp != NULL) {
-		DRV_LOG(DEBUG, "port %u mempool %s should be resized, freeing it",
-			dev->data->port_id, mp->name);
+		return 0;
+	}
+	if (mp != NULL) {
+		DRV_LOG(DEBUG, "port %u queue %u mempool %s should be resized, freeing it",
+			dev->data->port_id, rxq->idx, mp->name);
 		/*
 		 * If failed to free, which means it may be still in use, no way
 		 * but to keep using the existing one. On buffer underrun,
 		 * packets will be memcpy'd instead of external buffer
 		 * attachment.
 		 */
-		if (mlx5_mprq_free_mp(dev)) {
+		if (mlx5_mprq_free_mp(dev, rxq_ctrl) != 0) {
 			if (mp->elt_size >= obj_size)
-				goto exit;
+				return 0;
 			else
 				return -rte_errno;
 		}
 	}
-	snprintf(name, sizeof(name), "port-%u-mprq", dev->data->port_id);
+	snprintf(name, sizeof(name), "port-%u-queue-%hu-mprq",
+		 dev->data->port_id, rxq->idx);
 	mp = rte_mempool_create(name, obj_num, obj_size, MLX5_MPRQ_MP_CACHE_SZ,
 				0, NULL, NULL, mlx5_mprq_buf_init,
-				(void *)((uintptr_t)1 << strd_num_n),
-				dev->device->numa_node, 0);
+				(void *)(((uintptr_t)1) << strd_num_n),
+				dev->device->numa_node, MEMPOOL_F_SC_GET);
 	if (mp == NULL) {
 		DRV_LOG(ERR,
-			"port %u failed to allocate a mempool for"
+			"port %u queue %hu failed to allocate a mempool for"
 			" Multi-Packet RQ, count=%u, size=%u",
-			dev->data->port_id, obj_num, obj_size);
+			dev->data->port_id, rxq->idx, obj_num, obj_size);
 		rte_errno = ENOMEM;
 		return -rte_errno;
 	}
-	priv->mprq_mp = mp;
-exit:
-	/* Set mempool for each Rx queue. */
-	for (i = 0; i != priv->rxqs_n; ++i) {
-		struct mlx5_rxq_data *rxq = (*priv->rxqs)[i];
-		struct mlx5_rxq_ctrl *rxq_ctrl = container_of
-			(rxq, struct mlx5_rxq_ctrl, rxq);
-
-		if (rxq == NULL || rxq_ctrl->type != MLX5_RXQ_TYPE_STANDARD)
-			continue;
-		rxq->mprq_mp = mp;
-	}
-	DRV_LOG(INFO, "port %u Multi-Packet RQ is configured",
-		dev->data->port_id);
+	rxq->mprq_mp = mp;
+	DRV_LOG(INFO, "port %u queue %hu Multi-Packet RQ is configured",
+		dev->data->port_id, rxq->idx);
 	return 0;
 }
 
@@ -1717,8 +1688,10 @@ mlx5_rxq_release(struct rte_eth_dev *dev, uint16_t idx)
 		dev->data->rx_queue_state[idx] = RTE_ETH_QUEUE_STATE_STOPPED;
 	}
 	if (!__atomic_load_n(&rxq_ctrl->refcnt, __ATOMIC_RELAXED)) {
-		if (rxq_ctrl->type == MLX5_RXQ_TYPE_STANDARD)
+		if (rxq_ctrl->type == MLX5_RXQ_TYPE_STANDARD) {
 			mlx5_mr_btree_free(&rxq_ctrl->rxq.mr_ctrl.cache_bh);
+			mlx5_mprq_free_mp(dev, rxq_ctrl);
+		}
 		LIST_REMOVE(rxq_ctrl, next);
 		mlx5_free(rxq_ctrl);
 		(*priv->rxqs)[idx] = NULL;
diff --git a/drivers/net/mlx5/mlx5_trigger.c b/drivers/net/mlx5/mlx5_trigger.c
index c3adf5082e6..0753dbad053 100644
--- a/drivers/net/mlx5/mlx5_trigger.c
+++ b/drivers/net/mlx5/mlx5_trigger.c
@@ -138,11 +138,6 @@ mlx5_rxq_start(struct rte_eth_dev *dev)
 	unsigned int i;
 	int ret = 0;
 
-	/* Allocate/reuse/resize mempool for Multi-Packet RQ. */
-	if (mlx5_mprq_alloc_mp(dev)) {
-		/* Should not release Rx queues but return immediately. */
-		return -rte_errno;
-	}
 	DRV_LOG(DEBUG, "Port %u device_attr.max_qp_wr is %d.",
 		dev->data->port_id, priv->sh->device_attr.max_qp_wr);
 	DRV_LOG(DEBUG, "Port %u device_attr.max_sge is %d.",
@@ -153,8 +148,11 @@ mlx5_rxq_start(struct rte_eth_dev *dev)
 		if (!rxq_ctrl)
 			continue;
 		if (rxq_ctrl->type == MLX5_RXQ_TYPE_STANDARD) {
-			/* Pre-register Rx mempools. */
 			if (mlx5_rxq_mprq_enabled(&rxq_ctrl->rxq)) {
+				/* Allocate/reuse/resize mempool for MPRQ. */
+				if (mlx5_mprq_alloc_mp(dev, rxq_ctrl) < 0)
+					goto error;
+				/* Pre-register Rx mempools. */
 				mlx5_mr_update_mp(dev, &rxq_ctrl->rxq.mr_ctrl,
 						  rxq_ctrl->rxq.mprq_mp);
 			} else {
-- 
2.33.0
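
For readers less familiar with the mempool flag referenced in the commit
message: MEMPOOL_F_SC_GET tells rte_mempool_create() that only a single
consumer will ever dequeue from the pool, which is what the per-queue split
makes safe. The sketch below is illustrative only and is not part of the
patch; the function name example_create_sc_pool, the name pattern
"example-mprq", and the sizes passed by the caller are made up for the
example, while the API and the flag are the standard DPDK ones used above.

#include <stdio.h>
#include <stdint.h>
#include <rte_mempool.h>

/*
 * Illustrative sketch, not part of the patch: create one mempool per Rx
 * queue with MEMPOOL_F_SC_GET, mirroring the flag change in the patch.
 */
static struct rte_mempool *
example_create_sc_pool(uint16_t port_id, uint16_t queue_id,
		       unsigned int n_obj, unsigned int obj_size, int socket)
{
	char name[RTE_MEMPOOL_NAMESIZE];

	/* One pool per Rx queue, hence the per-queue name. */
	snprintf(name, sizeof(name), "port-%u-queue-%u-example-mprq",
		 (unsigned int)port_id, (unsigned int)queue_id);
	/*
	 * MEMPOOL_F_SC_GET: only this queue's datapath calls
	 * rte_mempool_get() on the pool, so the cheaper single-consumer
	 * dequeue path can be used instead of the multi-consumer one.
	 */
	return rte_mempool_create(name, n_obj, obj_size,
				  0 /* cache */, 0 /* private data */,
				  NULL, NULL, NULL, NULL,
				  socket, MEMPOOL_F_SC_GET);
}

In the patch itself the same call keeps the MLX5_MPRQ_MP_CACHE_SZ cache and
the mlx5_mprq_buf_init object constructor; the behavioural change is the
per-queue pool name and the MEMPOOL_F_SC_GET flag in the last argument.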